Red Hat Bugzilla – Attachment 1453682 Details for
Bug 1594176 – Error response from daemon: No such container: ceph-mon-controller-0
ansible.log (text/plain), 2.47 MB, created by Filip Hubík on 2018-06-22 10:27:11 UTC

Description: /var/lib/mistral/xyz/ansible.log
Filename: ansible.log
MIME Type: text/plain
Creator: Filip Hubík
Created: 2018-06-22 10:27:11 UTC
Size: 2.47 MB
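Every entry in the attached ansible.log carries the same prefix: a millisecond timestamp, the ansible-playbook process ID (`p=11115`), the invoking user (`u=mistral`), and a `|` separator before the message. A minimal sketch of parsing that prefix, assuming this fixed layout holds for the whole file (the function name and tuple shape here are illustrative, not part of any tool in the log):

```python
import re

# Each ansible.log entry looks like:
#   2018-06-22 04:48:34,194 p=11115 u=mistral | <message>
ENTRY_RE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}) "
    r"p=(?P<pid>\d+) u=(?P<user>\S+) \| (?P<msg>.*)$"
)

def parse_entry(line: str):
    """Return (timestamp, pid, user, message), or None if the line
    does not match the entry prefix (e.g. a wrapped continuation)."""
    m = ENTRY_RE.match(line)
    if m is None:
        return None
    return m.group("ts"), int(m.group("pid")), m.group("user"), m.group("msg")

sample = ("2018-06-22 04:48:34,194 p=11115 u=mistral | "
          "Using /var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ansible.cfg as config file")
print(parse_entry(sample))
```

Grouping entries by the `PLAY [...]` and `TASK [...]` banner messages then reconstructs the playbook structure visible below.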
2018-06-22 04:48:34,194 p=11115 u=mistral | Using /var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ansible.cfg as config file
2018-06-22 04:48:34,833 p=11115 u=mistral | PLAY [Gather facts from undercloud] ********************************************
2018-06-22 04:48:34,843 p=11115 u=mistral | TASK [Gathering Facts] *********************************************************
2018-06-22 04:48:35,600 p=11115 u=mistral | ok: [undercloud]
2018-06-22 04:48:35,615 p=11115 u=mistral | PLAY [Gather facts from overcloud] *********************************************
2018-06-22 04:48:35,623 p=11115 u=mistral | TASK [Gathering Facts] *********************************************************
2018-06-22 04:48:38,668 p=11115 u=mistral | ok: [compute-0]
2018-06-22 04:48:38,790 p=11115 u=mistral | ok: [controller-0]
2018-06-22 04:48:38,942 p=11115 u=mistral | ok: [ceph-0]
2018-06-22 04:48:38,958 p=11115 u=mistral | PLAY [Load global variables] ***************************************************
2018-06-22 04:48:38,979 p=11115 u=mistral | TASK [include_vars] ************************************************************
2018-06-22 04:48:39,036 p=11115 u=mistral | ok: [compute-0] => {"ansible_facts": {"deploy_steps_max": 6, "ssh_known_hosts": {"ceph-0": "172.17.3.17,ceph-0.localdomain,ceph-0,172.17.3.17,ceph-0.storage.localdomain,ceph-0.storage,172.17.4.10,ceph-0.storagemgmt.localdomain,ceph-0.storagemgmt,192.168.24.13,ceph-0.internalapi.localdomain,ceph-0.internalapi,192.168.24.13,ceph-0.tenant.localdomain,ceph-0.tenant,192.168.24.13,ceph-0.external.localdomain,ceph-0.external,192.168.24.13,ceph-0.management.localdomain,ceph-0.management,192.168.24.13,ceph-0.ctlplane.localdomain,ceph-0.ctlplane", "compute-0": "172.17.1.14,compute-0.localdomain,compute-0,172.17.3.13,compute-0.storage.localdomain,compute-0.storage,192.168.24.16,compute-0.storagemgmt.localdomain,compute-0.storagemgmt,172.17.1.14,compute-0.internalapi.localdomain,compute-0.internalapi,172.17.2.15,compute-0.tenant.localdomain,compute-0.tenant,192.168.24.16,compute-0.external.localdomain,compute-0.external,192.168.24.16,compute-0.management.localdomain,compute-0.management,192.168.24.16,compute-0.ctlplane.localdomain,compute-0.ctlplane", "controller-0": "172.17.1.10,controller-0.localdomain,controller-0,172.17.3.11,controller-0.storage.localdomain,controller-0.storage,172.17.4.19,controller-0.storagemgmt.localdomain,controller-0.storagemgmt,172.17.1.10,controller-0.internalapi.localdomain,controller-0.internalapi,172.17.2.12,controller-0.tenant.localdomain,controller-0.tenant,10.0.0.111,controller-0.external.localdomain,controller-0.external,192.168.24.12,controller-0.management.localdomain,controller-0.management,192.168.24.12,controller-0.ctlplane.localdomain,controller-0.ctlplane"}}, "ansible_included_var_files": ["/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/global_vars.yaml"], "changed": false}
2018-06-22 04:48:39,063 p=11115 u=mistral | ok: [controller-0] => {"ansible_facts": {"deploy_steps_max": 6, "ssh_known_hosts": {"ceph-0": "172.17.3.17,ceph-0.localdomain,ceph-0,172.17.3.17,ceph-0.storage.localdomain,ceph-0.storage,172.17.4.10,ceph-0.storagemgmt.localdomain,ceph-0.storagemgmt,192.168.24.13,ceph-0.internalapi.localdomain,ceph-0.internalapi,192.168.24.13,ceph-0.tenant.localdomain,ceph-0.tenant,192.168.24.13,ceph-0.external.localdomain,ceph-0.external,192.168.24.13,ceph-0.management.localdomain,ceph-0.management,192.168.24.13,ceph-0.ctlplane.localdomain,ceph-0.ctlplane", "compute-0": "172.17.1.14,compute-0.localdomain,compute-0,172.17.3.13,compute-0.storage.localdomain,compute-0.storage,192.168.24.16,compute-0.storagemgmt.localdomain,compute-0.storagemgmt,172.17.1.14,compute-0.internalapi.localdomain,compute-0.internalapi,172.17.2.15,compute-0.tenant.localdomain,compute-0.tenant,192.168.24.16,compute-0.external.localdomain,compute-0.external,192.168.24.16,compute-0.management.localdomain,compute-0.management,192.168.24.16,compute-0.ctlplane.localdomain,compute-0.ctlplane", "controller-0": "172.17.1.10,controller-0.localdomain,controller-0,172.17.3.11,controller-0.storage.localdomain,controller-0.storage,172.17.4.19,controller-0.storagemgmt.localdomain,controller-0.storagemgmt,172.17.1.10,controller-0.internalapi.localdomain,controller-0.internalapi,172.17.2.12,controller-0.tenant.localdomain,controller-0.tenant,10.0.0.111,controller-0.external.localdomain,controller-0.external,192.168.24.12,controller-0.management.localdomain,controller-0.management,192.168.24.12,controller-0.ctlplane.localdomain,controller-0.ctlplane"}}, "ansible_included_var_files": ["/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/global_vars.yaml"], "changed": false}
2018-06-22 04:48:39,064 p=11115 u=mistral | ok: [undercloud] => {"ansible_facts": {"deploy_steps_max": 6, "ssh_known_hosts": {"ceph-0": "172.17.3.17,ceph-0.localdomain,ceph-0,172.17.3.17,ceph-0.storage.localdomain,ceph-0.storage,172.17.4.10,ceph-0.storagemgmt.localdomain,ceph-0.storagemgmt,192.168.24.13,ceph-0.internalapi.localdomain,ceph-0.internalapi,192.168.24.13,ceph-0.tenant.localdomain,ceph-0.tenant,192.168.24.13,ceph-0.external.localdomain,ceph-0.external,192.168.24.13,ceph-0.management.localdomain,ceph-0.management,192.168.24.13,ceph-0.ctlplane.localdomain,ceph-0.ctlplane", "compute-0": "172.17.1.14,compute-0.localdomain,compute-0,172.17.3.13,compute-0.storage.localdomain,compute-0.storage,192.168.24.16,compute-0.storagemgmt.localdomain,compute-0.storagemgmt,172.17.1.14,compute-0.internalapi.localdomain,compute-0.internalapi,172.17.2.15,compute-0.tenant.localdomain,compute-0.tenant,192.168.24.16,compute-0.external.localdomain,compute-0.external,192.168.24.16,compute-0.management.localdomain,compute-0.management,192.168.24.16,compute-0.ctlplane.localdomain,compute-0.ctlplane", "controller-0": "172.17.1.10,controller-0.localdomain,controller-0,172.17.3.11,controller-0.storage.localdomain,controller-0.storage,172.17.4.19,controller-0.storagemgmt.localdomain,controller-0.storagemgmt,172.17.1.10,controller-0.internalapi.localdomain,controller-0.internalapi,172.17.2.12,controller-0.tenant.localdomain,controller-0.tenant,10.0.0.111,controller-0.external.localdomain,controller-0.external,192.168.24.12,controller-0.management.localdomain,controller-0.management,192.168.24.12,controller-0.ctlplane.localdomain,controller-0.ctlplane"}}, "ansible_included_var_files": ["/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/global_vars.yaml"], "changed": false}
2018-06-22 04:48:39,095 p=11115 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deploy_steps_max": 6, "ssh_known_hosts": {"ceph-0": "172.17.3.17,ceph-0.localdomain,ceph-0,172.17.3.17,ceph-0.storage.localdomain,ceph-0.storage,172.17.4.10,ceph-0.storagemgmt.localdomain,ceph-0.storagemgmt,192.168.24.13,ceph-0.internalapi.localdomain,ceph-0.internalapi,192.168.24.13,ceph-0.tenant.localdomain,ceph-0.tenant,192.168.24.13,ceph-0.external.localdomain,ceph-0.external,192.168.24.13,ceph-0.management.localdomain,ceph-0.management,192.168.24.13,ceph-0.ctlplane.localdomain,ceph-0.ctlplane", "compute-0": "172.17.1.14,compute-0.localdomain,compute-0,172.17.3.13,compute-0.storage.localdomain,compute-0.storage,192.168.24.16,compute-0.storagemgmt.localdomain,compute-0.storagemgmt,172.17.1.14,compute-0.internalapi.localdomain,compute-0.internalapi,172.17.2.15,compute-0.tenant.localdomain,compute-0.tenant,192.168.24.16,compute-0.external.localdomain,compute-0.external,192.168.24.16,compute-0.management.localdomain,compute-0.management,192.168.24.16,compute-0.ctlplane.localdomain,compute-0.ctlplane", "controller-0": "172.17.1.10,controller-0.localdomain,controller-0,172.17.3.11,controller-0.storage.localdomain,controller-0.storage,172.17.4.19,controller-0.storagemgmt.localdomain,controller-0.storagemgmt,172.17.1.10,controller-0.internalapi.localdomain,controller-0.internalapi,172.17.2.12,controller-0.tenant.localdomain,controller-0.tenant,10.0.0.111,controller-0.external.localdomain,controller-0.external,192.168.24.12,controller-0.management.localdomain,controller-0.management,192.168.24.12,controller-0.ctlplane.localdomain,controller-0.ctlplane"}}, "ansible_included_var_files": ["/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/global_vars.yaml"], "changed": false}
2018-06-22 04:48:39,104 p=11115 u=mistral | PLAY [Common roles for TripleO servers] ****************************************
2018-06-22 04:48:39,125 p=11115 u=mistral | TASK [tripleo-bootstrap : Deploy required packages to bootstrap TripleO] *******
2018-06-22 04:48:40,011 p=11115 u=mistral | ok: [compute-0] => {"changed": false, "msg": "", "rc": 0, "results": ["openstack-heat-agents-1.6.1-0.20180605100743.235e1ae.el7ost.noarch providing openstack-heat-agents is already installed", "jq-1.3-4.el7ost.x86_64 providing jq is already installed"]}
2018-06-22 04:48:40,022 p=11115 u=mistral | ok: [controller-0] => {"changed": false, "msg": "", "rc": 0, "results": ["openstack-heat-agents-1.6.1-0.20180605100743.235e1ae.el7ost.noarch providing openstack-heat-agents is already installed", "jq-1.3-4.el7ost.x86_64 providing jq is already installed"]}
2018-06-22 04:48:40,030 p=11115 u=mistral | ok: [ceph-0] => {"changed": false, "msg": "", "rc": 0, "results": ["openstack-heat-agents-1.6.1-0.20180605100743.235e1ae.el7ost.noarch providing openstack-heat-agents is already installed", "jq-1.3-4.el7ost.x86_64 providing jq is already installed"]}
2018-06-22 04:48:40,049 p=11115 u=mistral | TASK [tripleo-bootstrap : Create /var/lib/heat-config/tripleo-config-download directory for deployment data] ***
2018-06-22 04:48:40,523 p=11115 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/heat-config/tripleo-config-download", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0}
2018-06-22 04:48:40,529 p=11115 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/heat-config/tripleo-config-download", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0}
2018-06-22 04:48:40,538 p=11115 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/heat-config/tripleo-config-download", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0}
2018-06-22 04:48:40,558 p=11115 u=mistral | TASK [tripleo-ssh-known-hosts : Template /etc/ssh/ssh_known_hosts] *************
2018-06-22 04:48:41,543 p=11115 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "2c8e744bc1907107e458e1816a5faba8c339f99d", "dest": "/etc/ssh/ssh_known_hosts", "gid": 0, "group": "root", "md5sum": "f0cc6849e574558dd19fb2e0f080dcbf", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 1908, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657320.64-280313311890357/source", "state": "file", "uid": 0}
2018-06-22 04:48:41,551 p=11115 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "2c8e744bc1907107e458e1816a5faba8c339f99d", "dest": "/etc/ssh/ssh_known_hosts", "gid": 0, "group": "root", "md5sum": "f0cc6849e574558dd19fb2e0f080dcbf", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 1908, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657320.6-214476059991605/source", "state": "file", "uid": 0}
2018-06-22 04:48:41,557 p=11115 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "2c8e744bc1907107e458e1816a5faba8c339f99d", "dest": "/etc/ssh/ssh_known_hosts", "gid": 0, "group": "root", "md5sum": "f0cc6849e574558dd19fb2e0f080dcbf", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 1908, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657320.63-24887381873841/source", "state": "file", "uid": 0}
2018-06-22 04:48:41,564 p=11115 u=mistral | PLAY [Overcloud deploy step tasks for step 0] **********************************
2018-06-22 04:48:41,587 p=11115 u=mistral | TASK [include_role] ************************************************************
2018-06-22 04:48:41,614 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-06-22 04:48:41,636 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-06-22 04:48:41,647 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-06-22 04:48:41,668 p=11115 u=mistral | TASK [include_role] ************************************************************
2018-06-22 04:48:41,694 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-06-22 04:48:41,718 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-06-22 04:48:41,728 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-06-22 04:48:41,748 p=11115 u=mistral | TASK [include_role] ************************************************************
2018-06-22 04:48:41,776 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-06-22 04:48:41,798 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-06-22 04:48:41,811 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-06-22 04:48:41,832 p=11115 u=mistral | TASK [include_role] ************************************************************
2018-06-22 04:48:41,859 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-06-22 04:48:41,881 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-06-22 04:48:41,892 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-06-22 04:48:41,913 p=11115 u=mistral | TASK [include_role] ************************************************************
2018-06-22 04:48:41,937 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-06-22 04:48:41,963 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-06-22 04:48:41,974 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-06-22 04:48:41,979 p=11115 u=mistral | PLAY [Server deployments] ******************************************************
2018-06-22 04:48:42,002 p=11115 u=mistral | TASK [include] *****************************************************************
2018-06-22 04:48:42,215 p=11115 u=mistral | included: /var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/Controller/deployments.yaml for controller-0
2018-06-22 04:48:42,223 p=11115 u=mistral | included: /var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/Controller/deployments.yaml for controller-0
2018-06-22 04:48:42,230 p=11115 u=mistral | included: /var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/Controller/deployments.yaml for controller-0
2018-06-22 04:48:42,238 p=11115 u=mistral | included: /var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/Controller/deployments.yaml for controller-0
2018-06-22 04:48:42,246 p=11115 u=mistral | included: /var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/Controller/deployments.yaml for controller-0
2018-06-22 04:48:42,254 p=11115 u=mistral | included: /var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/Controller/deployments.yaml for controller-0
2018-06-22 04:48:42,263 p=11115 u=mistral | included: /var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/Controller/deployments.yaml for controller-0
2018-06-22 04:48:42,270 p=11115 u=mistral | included: /var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/Controller/deployments.yaml for controller-0
2018-06-22 04:48:42,292 p=11115 u=mistral | TASK [Lookup deployment UUID] **************************************************
2018-06-22 04:48:42,350 p=11115 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "5ac903b7-9808-4d47-bc04-5a0642353924"}, "changed": false}
2018-06-22 04:48:42,372 p=11115 u=mistral | TASK [Render deployment file for NetworkDeployment] ****************************
2018-06-22 04:48:42,995 p=11115 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "fee5e97093afde5178f9b9a7e5e3bf67d42f6fdb", "dest": "/var/lib/heat-config/tripleo-config-download/NetworkDeployment-5ac903b7-9808-4d47-bc04-5a0642353924", "gid": 0, "group": "root", "md5sum": "191bfb7e6077417df6d7235d3aaf2172", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 10198, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657322.43-141446888109558/source", "state": "file", "uid": 0}
2018-06-22 04:48:43,021 p=11115 u=mistral | TASK [Check if deployed file exists for NetworkDeployment] *********************
2018-06-22 04:48:43,352 p=11115 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}}
2018-06-22 04:48:43,377 p=11115 u=mistral | TASK [Check previous deployment rc for NetworkDeployment] **********************
2018-06-22 04:48:43,395 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-06-22 04:48:43,416 p=11115 u=mistral | TASK [Remove deployed file for NetworkDeployment when previous deployment failed] ***
2018-06-22 04:48:43,432 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-06-22 04:48:43,455 p=11115 u=mistral | TASK [Force remove deployed file for NetworkDeployment] ************************
2018-06-22 04:48:43,469 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-06-22 04:48:43,491 p=11115 u=mistral | TASK [Run deployment NetworkDeployment] ****************************************
2018-06-22 04:49:12,569 p=11115 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/5ac903b7-9808-4d47-bc04-5a0642353924.notify.json)", "delta": "0:00:28.582440", "end": "2018-06-22 04:49:12.548620", "rc": 0, "start": "2018-06-22 04:48:43.966180", "stderr": "[2018-06-22 04:48:43,989] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/5ac903b7-9808-4d47-bc04-5a0642353924.json\n[2018-06-22 04:49:12,154] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.3...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.12/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.11/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.19/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.12/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"10.0.0.111/24\\\"}], \\\"members\\\": [{\\\"name\\\": \\\"nic3\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}], \\\"name\\\": \\\"bridge_name\\\", \\\"routes\\\": [{\\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"10.0.0.1\\\"}], \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.12/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.11/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.19/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.12/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"10.0.0.111/24\\\"}], \\\"members\\\": [{\\\"name\\\": \\\"nic3\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}], \\\"name\\\": \\\"bridge_name\\\", \\\"routes\\\": [{\\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"10.0.0.1\\\"}], \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/06/22 04:48:44 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/06/22 04:48:44 AM] [INFO] Ifcfg net config provider created.\\n[2018/06/22 04:48:44 AM] [INFO] Not using any mapping file.\\n[2018/06/22 04:48:44 AM] [INFO] Finding active nics\\n[2018/06/22 04:48:44 AM] [INFO] eth0 is an embedded active nic\\n[2018/06/22 04:48:44 AM] [INFO] eth1 is an embedded active nic\\n[2018/06/22 04:48:44 AM] [INFO] eth2 is an embedded active nic\\n[2018/06/22 04:48:44 AM] [INFO] lo is not an active nic\\n[2018/06/22 04:48:44 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/06/22 04:48:44 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/06/22 04:48:44 AM] [INFO] nic3 mapped to: eth2\\n[2018/06/22 04:48:44 AM] [INFO] nic2 mapped to: eth1\\n[2018/06/22 04:48:44 AM] [INFO] nic1 mapped to: eth0\\n[2018/06/22 04:48:44 AM] [INFO] adding interface: eth0\\n[2018/06/22 04:48:44 AM] [INFO] adding custom route for interface: eth0\\n[2018/06/22 04:48:44 AM] [INFO] adding bridge: br-isolated\\n[2018/06/22 04:48:44 AM] [INFO] adding interface: eth1\\n[2018/06/22 04:48:44 AM] [INFO] adding vlan: vlan20\\n[2018/06/22 04:48:44 AM] [INFO] adding vlan: vlan30\\n[2018/06/22 04:48:44 AM] [INFO] adding vlan: vlan40\\n[2018/06/22 04:48:44 AM] [INFO] adding vlan: vlan50\\n[2018/06/22 04:48:44 AM] [INFO] adding bridge: br-ex\\n[2018/06/22 04:48:44 AM] [INFO] adding custom route for interface: br-ex\\n[2018/06/22 04:48:44 AM] [INFO] adding interface: eth2\\n[2018/06/22 04:48:44 AM] [INFO] applying network configs...\\n[2018/06/22 04:48:44 AM] [INFO] running ifdown on interface: vlan20\\n[2018/06/22 04:48:44 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/22 04:48:44 AM] [INFO] running ifdown on interface: vlan40\\n[2018/06/22 04:48:44 AM] [INFO] running ifdown on interface: vlan50\\n[2018/06/22 04:48:44 AM] [INFO] running ifdown on interface: eth2\\n[2018/06/22 04:48:44 AM] [INFO] running ifdown on interface: eth1\\n[2018/06/22 04:48:44 AM] [INFO] running ifdown on interface: eth0\\n[2018/06/22 04:48:45 AM] [INFO] running ifdown on interface: vlan50\\n[2018/06/22 04:48:45 AM] [INFO] running ifdown on interface: vlan20\\n[2018/06/22 04:48:45 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/22 04:48:45 AM] [INFO] running ifdown on interface: vlan40\\n[2018/06/22 04:48:45 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/06/22 04:48:45 AM] [INFO] running ifdown on bridge: br-ex\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-ex\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-ex\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-ex\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/06/22 04:48:45 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/06/22 04:48:45 AM] [INFO] running ifup on bridge: br-ex\\n[2018/06/22 04:48:49 AM] [INFO] running ifup on interface: eth2\\n[2018/06/22 04:48:49 AM] [INFO] running ifup on interface: eth1\\n[2018/06/22 04:48:50 AM] [INFO] running ifup on interface: eth0\\n[2018/06/22 04:48:54 AM] [INFO] running ifup on interface: vlan50\\n[2018/06/22 04:48:58 AM] [INFO] running ifup on interface: vlan20\\n[2018/06/22 04:49:02 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/22 04:49:06 AM] [INFO] running ifup on interface: vlan40\\n[2018/06/22 04:49:11 AM] [INFO] running ifup on interface: vlan20\\n[2018/06/22 04:49:11 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/22 04:49:11 AM] [INFO] running ifup on interface: vlan40\\n[2018/06/22 04:49:11 AM] [INFO] running ifup on interface: vlan50\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.3\\n++ '[' -n 192.168.24.3 ']'\\n++ break\\n++ echo 192.168.24.3\\n+ local METADATA_IP=192.168.24.3\\n+ '[' -n 192.168.24.3 ']'\\n+ is_local_ip 192.168.24.3\\n+ local IP_TO_CHECK=192.168.24.3\\n+ ip -o a\\n+ grep 'inet6\\\\? 192.168.24.3/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\\n+ _ping=ping\\n+ [[ 192.168.24.3 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.3\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}\n[2018-06-22 04:49:12,154] (heat-config) [DEBUG] [2018-06-22 04:48:44,010] (heat-config) [INFO] interface_name=nic1\n[2018-06-22 04:48:44,010] (heat-config) [INFO] bridge_name=br-ex\n[2018-06-22 04:48:44,010] (heat-config) [INFO] deploy_server_id=c1fa7088-58e0-4167-924a-7460143754f1\n[2018-06-22 04:48:44,011] (heat-config) [INFO] deploy_action=CREATE\n[2018-06-22 04:48:44,011] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-gr4fhevw7mwd-0-u7fw5x6qbc5z-NetworkDeployment-fvhzbouufpoh-TripleOSoftwareDeployment-wvmdaocwtp2c/327d5572-eb54-436b-b3e7-c8cd60e05851\n[2018-06-22 04:48:44,011] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-22 04:48:44,011] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-22 04:48:44,011] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/5ac903b7-9808-4d47-bc04-5a0642353924\n[2018-06-22 04:49:12,150] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.3...SUCCESS\n\n[2018-06-22 04:49:12,150] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.12/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.11/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.19/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.12/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"addresses\": [{\"ip_netmask\": \"10.0.0.111/24\"}], \"members\": [{\"name\": \"nic3\", \"primary\": true, \"type\": \"interface\"}], \"name\": \"bridge_name\", \"routes\": [{\"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"10.0.0.1\"}], \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}' ']'\n+ '[' -z '' ']'\n+ trap configure_safe_defaults EXIT\n+ mkdir -p /etc/os-net-config\n+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.12/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.11/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.19/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.12/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"addresses\": [{\"ip_netmask\": \"10.0.0.111/24\"}], \"members\": [{\"name\": \"nic3\", \"primary\": true, \"type\": \"interface\"}], \"name\": \"bridge_name\", \"routes\": [{\"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"10.0.0.1\"}], \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}'\n++ type -t network_config_hook\n+ '[' '' = function ']'\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\n+ set +e\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\n[2018/06/22 04:48:44 AM] [INFO] Using config file at: /etc/os-net-config/config.json\n[2018/06/22 04:48:44 AM] [INFO] Ifcfg net config provider created.\n[2018/06/22 04:48:44 AM] [INFO] Not using any mapping file.\n[2018/06/22 04:48:44 AM] [INFO] Finding active nics\n[2018/06/22 04:48:44 AM] [INFO] eth0 is an embedded active nic\n[2018/06/22 04:48:44 AM] [INFO] eth1 is an embedded active nic\n[2018/06/22 04:48:44 AM] [INFO] eth2 is an embedded active nic\n[2018/06/22 04:48:44 AM] [INFO] lo is not an active nic\n[2018/06/22 04:48:44 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\n[2018/06/22 04:48:44 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\n[2018/06/22 04:48:44 AM] [INFO] nic3 mapped to: eth2\n[2018/06/22 04:48:44 AM] [INFO] nic2 mapped to: eth1\n[2018/06/22 04:48:44 AM] [INFO] nic1 mapped to: eth0\n[2018/06/22 04:48:44 AM] [INFO] adding interface: eth0\n[2018/06/22 04:48:44 AM] [INFO] adding custom route for interface: eth0\n[2018/06/22 04:48:44 AM] [INFO] adding bridge: br-isolated\n[2018/06/22 04:48:44 AM] [INFO] adding interface: eth1\n[2018/06/22 04:48:44 AM] [INFO] adding vlan: vlan20\n[2018/06/22 04:48:44 AM] [INFO] adding vlan: vlan30\n[2018/06/22 04:48:44 AM] [INFO] adding vlan: vlan40\n[2018/06/22 04:48:44 AM] [INFO] adding vlan: vlan50\n[2018/06/22 04:48:44 AM] [INFO] adding bridge: br-ex\n[2018/06/22 04:48:44 AM] [INFO] adding custom route for interface: br-ex\n[2018/06/22 04:48:44 AM] [INFO] adding interface: eth2\n[2018/06/22 04:48:44 AM] [INFO] applying network configs...\n[2018/06/22 04:48:44 AM] [INFO] running ifdown on interface: vlan20\n[2018/06/22 04:48:44 AM] [INFO] running ifdown on interface: vlan30\n[2018/06/22 04:48:44 AM] [INFO] running ifdown on interface: vlan40\n[2018/06/22 04:48:44 AM] [INFO] running ifdown on interface: vlan50\n[2018/06/22 04:48:44 AM] [INFO] running ifdown on interface: eth2\n[2018/06/22 04:48:44 AM] [INFO] running ifdown on interface: eth1\n[2018/06/22 04:48:44 AM] [INFO] running ifdown on interface: eth0\n[2018/06/22 04:48:45 AM] [INFO] running ifdown on interface: vlan50\n[2018/06/22 04:48:45 AM] [INFO] running ifdown on interface: vlan20\n[2018/06/22 04:48:45 AM] [INFO] running ifdown on interface: vlan30\n[2018/06/22 04:48:45 AM] [INFO] running ifdown on interface: vlan40\n[2018/06/22 04:48:45 AM] [INFO] running ifdown on bridge: br-isolated\n[2018/06/22 04:48:45 AM] [INFO] running ifdown on bridge: br-ex\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-ex\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\n[2018/06/22 04:48:45 AM] [INFO] 
Writing config /etc/sysconfig/network-scripts/route-vlan40\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-ex\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-ex\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\n[2018/06/22 04:48:45 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/ifcfg-eth0\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\n[2018/06/22 04:48:45 AM] [INFO] running ifup on bridge: br-isolated\n[2018/06/22 04:48:45 AM] [INFO] running ifup on bridge: br-ex\n[2018/06/22 04:48:49 AM] [INFO] running ifup on interface: eth2\n[2018/06/22 04:48:49 AM] [INFO] running ifup on interface: eth1\n[2018/06/22 04:48:50 AM] [INFO] running ifup on interface: eth0\n[2018/06/22 04:48:54 AM] [INFO] running ifup on interface: vlan50\n[2018/06/22 04:48:58 AM] [INFO] running ifup on interface: vlan20\n[2018/06/22 04:49:02 AM] [INFO] running ifup on interface: vlan30\n[2018/06/22 04:49:06 AM] [INFO] running ifup on interface: vlan40\n[2018/06/22 04:49:11 AM] [INFO] running ifup on interface: vlan20\n[2018/06/22 04:49:11 AM] [INFO] running ifup on interface: vlan30\n[2018/06/22 04:49:11 AM] [INFO] running ifup on interface: vlan40\n[2018/06/22 04:49:11 AM] [INFO] running ifup on interface: vlan50\n+ RETVAL=2\n+ set -e\n+ [[ 2 == 2 ]]\n+ ping_metadata_ip\n++ get_metadata_ip\n++ local METADATA_IP\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' 
--type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=192.168.24.3\n++ '[' -n 192.168.24.3 ']'\n++ break\n++ echo 192.168.24.3\n+ local METADATA_IP=192.168.24.3\n+ '[' -n 192.168.24.3 ']'\n+ is_local_ip 192.168.24.3\n+ local IP_TO_CHECK=192.168.24.3\n+ ip -o a\n+ grep 'inet6\\? 192.168.24.3/'\n+ return 1\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\n+ _ping=ping\n+ [[ 192.168.24.3 =~ : ]]\n+ local COUNT=0\n+ ping -c 1 192.168.24.3\n+ echo SUCCESS\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\n+ configure_safe_defaults\n+ [[ 0 == 0 ]]\n+ return 0\n\n[2018-06-22 04:49:12,150] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/5ac903b7-9808-4d47-bc04-5a0642353924\n\n[2018-06-22 04:49:12,155] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-22 04:49:12,156] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/5ac903b7-9808-4d47-bc04-5a0642353924.json < /var/lib/heat-config/deployed/5ac903b7-9808-4d47-bc04-5a0642353924.notify.json\n[2018-06-22 04:49:12,542] (heat-config) [INFO] \n[2018-06-22 04:49:12,542] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-22 04:48:43,989] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/5ac903b7-9808-4d47-bc04-5a0642353924.json", "[2018-06-22 04:49:12,154] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.3...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.12/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", 
\\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.11/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.19/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.12/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"10.0.0.111/24\\\"}], \\\"members\\\": [{\\\"name\\\": \\\"nic3\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}], \\\"name\\\": \\\"bridge_name\\\", \\\"routes\\\": [{\\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"10.0.0.1\\\"}], \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.12/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, 
{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.11/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.19/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.12/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"10.0.0.111/24\\\"}], \\\"members\\\": [{\\\"name\\\": \\\"nic3\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}], \\\"name\\\": \\\"bridge_name\\\", \\\"routes\\\": [{\\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"10.0.0.1\\\"}], \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/06/22 04:48:44 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/06/22 04:48:44 AM] [INFO] Ifcfg net config provider created.\\n[2018/06/22 04:48:44 AM] [INFO] Not using any mapping file.\\n[2018/06/22 04:48:44 AM] [INFO] Finding active nics\\n[2018/06/22 04:48:44 AM] [INFO] eth0 is an embedded active nic\\n[2018/06/22 04:48:44 AM] [INFO] eth1 is an embedded active nic\\n[2018/06/22 04:48:44 AM] [INFO] eth2 is an embedded active nic\\n[2018/06/22 04:48:44 AM] [INFO] lo is not an active nic\\n[2018/06/22 04:48:44 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/06/22 04:48:44 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/06/22 04:48:44 AM] [INFO] nic3 mapped to: eth2\\n[2018/06/22 04:48:44 AM] [INFO] nic2 mapped to: eth1\\n[2018/06/22 04:48:44 AM] [INFO] nic1 mapped to: eth0\\n[2018/06/22 04:48:44 AM] [INFO] 
adding interface: eth0\\n[2018/06/22 04:48:44 AM] [INFO] adding custom route for interface: eth0\\n[2018/06/22 04:48:44 AM] [INFO] adding bridge: br-isolated\\n[2018/06/22 04:48:44 AM] [INFO] adding interface: eth1\\n[2018/06/22 04:48:44 AM] [INFO] adding vlan: vlan20\\n[2018/06/22 04:48:44 AM] [INFO] adding vlan: vlan30\\n[2018/06/22 04:48:44 AM] [INFO] adding vlan: vlan40\\n[2018/06/22 04:48:44 AM] [INFO] adding vlan: vlan50\\n[2018/06/22 04:48:44 AM] [INFO] adding bridge: br-ex\\n[2018/06/22 04:48:44 AM] [INFO] adding custom route for interface: br-ex\\n[2018/06/22 04:48:44 AM] [INFO] adding interface: eth2\\n[2018/06/22 04:48:44 AM] [INFO] applying network configs...\\n[2018/06/22 04:48:44 AM] [INFO] running ifdown on interface: vlan20\\n[2018/06/22 04:48:44 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/22 04:48:44 AM] [INFO] running ifdown on interface: vlan40\\n[2018/06/22 04:48:44 AM] [INFO] running ifdown on interface: vlan50\\n[2018/06/22 04:48:44 AM] [INFO] running ifdown on interface: eth2\\n[2018/06/22 04:48:44 AM] [INFO] running ifdown on interface: eth1\\n[2018/06/22 04:48:44 AM] [INFO] running ifdown on interface: eth0\\n[2018/06/22 04:48:45 AM] [INFO] running ifdown on interface: vlan50\\n[2018/06/22 04:48:45 AM] [INFO] running ifdown on interface: vlan20\\n[2018/06/22 04:48:45 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/22 04:48:45 AM] [INFO] running ifdown on interface: vlan40\\n[2018/06/22 04:48:45 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/06/22 04:48:45 AM] [INFO] running ifdown on bridge: br-ex\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-ex\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/06/22 04:48:45 AM] [INFO] Writing 
config /etc/sysconfig/network-scripts/route-vlan40\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-ex\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-ex\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\\n[2018/06/22 04:48:45 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/06/22 04:48:45 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/06/22 04:48:45 AM] [INFO] running ifup on bridge: br-ex\\n[2018/06/22 04:48:49 AM] [INFO] running ifup on interface: eth2\\n[2018/06/22 04:48:49 AM] [INFO] running ifup on interface: eth1\\n[2018/06/22 04:48:50 AM] [INFO] running ifup on interface: eth0\\n[2018/06/22 04:48:54 AM] [INFO] running ifup on interface: vlan50\\n[2018/06/22 04:48:58 AM] [INFO] running ifup on interface: vlan20\\n[2018/06/22 04:49:02 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/22 04:49:06 AM] [INFO] running ifup on interface: vlan40\\n[2018/06/22 04:49:11 AM] [INFO] running ifup on interface: vlan20\\n[2018/06/22 04:49:11 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/22 04:49:11 AM] [INFO] running ifup on interface: vlan40\\n[2018/06/22 04:49:11 AM] [INFO] running ifup on interface: vlan50\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key 
os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.3\\n++ '[' -n 192.168.24.3 ']'\\n++ break\\n++ echo 192.168.24.3\\n+ local METADATA_IP=192.168.24.3\\n+ '[' -n 192.168.24.3 ']'\\n+ is_local_ip 192.168.24.3\\n+ local IP_TO_CHECK=192.168.24.3\\n+ ip -o a\\n+ grep 'inet6\\\\? 192.168.24.3/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\\n+ _ping=ping\\n+ [[ 192.168.24.3 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.3\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}", "[2018-06-22 04:49:12,154] (heat-config) [DEBUG] [2018-06-22 04:48:44,010] (heat-config) [INFO] interface_name=nic1", "[2018-06-22 04:48:44,010] (heat-config) [INFO] bridge_name=br-ex", "[2018-06-22 04:48:44,010] (heat-config) [INFO] deploy_server_id=c1fa7088-58e0-4167-924a-7460143754f1", "[2018-06-22 04:48:44,011] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-22 04:48:44,011] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-gr4fhevw7mwd-0-u7fw5x6qbc5z-NetworkDeployment-fvhzbouufpoh-TripleOSoftwareDeployment-wvmdaocwtp2c/327d5572-eb54-436b-b3e7-c8cd60e05851", "[2018-06-22 04:48:44,011] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-22 04:48:44,011] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-22 04:48:44,011] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/5ac903b7-9808-4d47-bc04-5a0642353924", "[2018-06-22 04:49:12,150] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.3...SUCCESS", "", "[2018-06-22 04:49:12,150] 
(heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.12/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.11/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.19/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.12/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"addresses\": [{\"ip_netmask\": \"10.0.0.111/24\"}], \"members\": [{\"name\": \"nic3\", \"primary\": true, \"type\": \"interface\"}], \"name\": \"bridge_name\", \"routes\": [{\"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"10.0.0.1\"}], \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}' ']'", "+ '[' -z '' ']'", "+ trap configure_safe_defaults EXIT", "+ mkdir -p /etc/os-net-config", "+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.12/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.11/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.19/24\"}], \"type\": 
\"vlan\", \"vlan_id\": 40}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.12/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"addresses\": [{\"ip_netmask\": \"10.0.0.111/24\"}], \"members\": [{\"name\": \"nic3\", \"primary\": true, \"type\": \"interface\"}], \"name\": \"bridge_name\", \"routes\": [{\"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"10.0.0.1\"}], \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}'", "++ type -t network_config_hook", "+ '[' '' = function ']'", "+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json", "+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json", "+ set +e", "+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes", "[2018/06/22 04:48:44 AM] [INFO] Using config file at: /etc/os-net-config/config.json", "[2018/06/22 04:48:44 AM] [INFO] Ifcfg net config provider created.", "[2018/06/22 04:48:44 AM] [INFO] Not using any mapping file.", "[2018/06/22 04:48:44 AM] [INFO] Finding active nics", "[2018/06/22 04:48:44 AM] [INFO] eth0 is an embedded active nic", "[2018/06/22 04:48:44 AM] [INFO] eth1 is an embedded active nic", "[2018/06/22 04:48:44 AM] [INFO] eth2 is an embedded active nic", "[2018/06/22 04:48:44 AM] [INFO] lo is not an active nic", "[2018/06/22 04:48:44 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)", "[2018/06/22 04:48:44 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']", "[2018/06/22 04:48:44 AM] [INFO] nic3 mapped to: eth2", "[2018/06/22 04:48:44 AM] [INFO] nic2 mapped to: eth1", "[2018/06/22 04:48:44 AM] [INFO] nic1 mapped to: eth0", "[2018/06/22 04:48:44 AM] [INFO] adding interface: eth0", "[2018/06/22 04:48:44 AM] [INFO] adding custom route for interface: eth0", "[2018/06/22 04:48:44 AM] [INFO] adding bridge: br-isolated", "[2018/06/22 04:48:44 AM] [INFO] adding interface: eth1", "[2018/06/22 04:48:44 AM] [INFO] adding vlan: vlan20", "[2018/06/22 04:48:44 AM] [INFO] 
adding vlan: vlan30", "[2018/06/22 04:48:44 AM] [INFO] adding vlan: vlan40", "[2018/06/22 04:48:44 AM] [INFO] adding vlan: vlan50", "[2018/06/22 04:48:44 AM] [INFO] adding bridge: br-ex", "[2018/06/22 04:48:44 AM] [INFO] adding custom route for interface: br-ex", "[2018/06/22 04:48:44 AM] [INFO] adding interface: eth2", "[2018/06/22 04:48:44 AM] [INFO] applying network configs...", "[2018/06/22 04:48:44 AM] [INFO] running ifdown on interface: vlan20", "[2018/06/22 04:48:44 AM] [INFO] running ifdown on interface: vlan30", "[2018/06/22 04:48:44 AM] [INFO] running ifdown on interface: vlan40", "[2018/06/22 04:48:44 AM] [INFO] running ifdown on interface: vlan50", "[2018/06/22 04:48:44 AM] [INFO] running ifdown on interface: eth2", "[2018/06/22 04:48:44 AM] [INFO] running ifdown on interface: eth1", "[2018/06/22 04:48:44 AM] [INFO] running ifdown on interface: eth0", "[2018/06/22 04:48:45 AM] [INFO] running ifdown on interface: vlan50", "[2018/06/22 04:48:45 AM] [INFO] running ifdown on interface: vlan20", "[2018/06/22 04:48:45 AM] [INFO] running ifdown on interface: vlan30", "[2018/06/22 04:48:45 AM] [INFO] running ifdown on interface: vlan40", "[2018/06/22 04:48:45 AM] [INFO] running ifdown on bridge: br-isolated", "[2018/06/22 04:48:45 AM] [INFO] running ifdown on bridge: br-ex", "[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-ex", "[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30", "[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50", "[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30", "[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40", "[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20", "[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50", "[2018/06/22 04:48:45 AM] [INFO] Writing 
config /etc/sysconfig/network-scripts/route-br-isolated", "[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0", "[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1", "[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2", "[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50", "[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-ex", "[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20", "[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40", "[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20", "[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-ex", "[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30", "[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated", "[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated", "[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2", "[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1", "[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0", "[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40", "[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2", "[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0", "[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1", "[2018/06/22 04:48:45 AM] [INFO] running ifup on bridge: br-isolated", "[2018/06/22 04:48:45 AM] [INFO] running ifup on bridge: br-ex", "[2018/06/22 04:48:49 
AM] [INFO] running ifup on interface: eth2", "[2018/06/22 04:48:49 AM] [INFO] running ifup on interface: eth1", "[2018/06/22 04:48:50 AM] [INFO] running ifup on interface: eth0", "[2018/06/22 04:48:54 AM] [INFO] running ifup on interface: vlan50", "[2018/06/22 04:48:58 AM] [INFO] running ifup on interface: vlan20", "[2018/06/22 04:49:02 AM] [INFO] running ifup on interface: vlan30", "[2018/06/22 04:49:06 AM] [INFO] running ifup on interface: vlan40", "[2018/06/22 04:49:11 AM] [INFO] running ifup on interface: vlan20", "[2018/06/22 04:49:11 AM] [INFO] running ifup on interface: vlan30", "[2018/06/22 04:49:11 AM] [INFO] running ifup on interface: vlan40", "[2018/06/22 04:49:11 AM] [INFO] running ifup on interface: vlan50", "+ RETVAL=2", "+ set -e", "+ [[ 2 == 2 ]]", "+ ping_metadata_ip", "++ get_metadata_ip", "++ local METADATA_IP", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=", "++ '[' -n '' ']'", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=", "++ '[' -n '' ']'", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=192.168.24.3", "++ '[' -n 192.168.24.3 ']'", "++ break", "++ echo 192.168.24.3", "+ local METADATA_IP=192.168.24.3", "+ '[' -n 192.168.24.3 ']'", 
"+ is_local_ip 192.168.24.3", "+ local IP_TO_CHECK=192.168.24.3", "+ ip -o a", "+ grep 'inet6\\? 192.168.24.3/'", "+ return 1", "+ echo -n 'Trying to ping metadata IP 192.168.24.3...'", "+ _ping=ping", "+ [[ 192.168.24.3 =~ : ]]", "+ local COUNT=0", "+ ping -c 1 192.168.24.3", "+ echo SUCCESS", "+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'", "+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules", "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'", "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'", "+ configure_safe_defaults", "+ [[ 0 == 0 ]]", "+ return 0", "", "[2018-06-22 04:49:12,150] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/5ac903b7-9808-4d47-bc04-5a0642353924", "", "[2018-06-22 04:49:12,155] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-22 04:49:12,156] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/5ac903b7-9808-4d47-bc04-5a0642353924.json < /var/lib/heat-config/deployed/5ac903b7-9808-4d47-bc04-5a0642353924.notify.json", "[2018-06-22 04:49:12,542] (heat-config) [INFO] ", "[2018-06-22 04:49:12,542] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-22 04:49:12,592 p=11115 u=mistral | TASK [Output for NetworkDeployment] ******************************************** >2018-06-22 04:49:12,652 p=11115 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-22 04:48:43,989] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/5ac903b7-9808-4d47-bc04-5a0642353924.json", > "[2018-06-22 04:49:12,154] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.3...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.12/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], 
\\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.11/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.19/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.12/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"10.0.0.111/24\\\"}], \\\"members\\\": [{\\\"name\\\": \\\"nic3\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}], \\\"name\\\": \\\"bridge_name\\\", \\\"routes\\\": [{\\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"10.0.0.1\\\"}], \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.12/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": 
[{\\\"ip_netmask\\\": \\\"172.17.1.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.11/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.19/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.12/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"10.0.0.111/24\\\"}], \\\"members\\\": [{\\\"name\\\": \\\"nic3\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}], \\\"name\\\": \\\"bridge_name\\\", \\\"routes\\\": [{\\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"10.0.0.1\\\"}], \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/06/22 04:48:44 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/06/22 04:48:44 AM] [INFO] Ifcfg net config provider created.\\n[2018/06/22 04:48:44 AM] [INFO] Not using any mapping file.\\n[2018/06/22 04:48:44 AM] [INFO] Finding active nics\\n[2018/06/22 04:48:44 AM] [INFO] eth0 is an embedded active nic\\n[2018/06/22 04:48:44 AM] [INFO] eth1 is an embedded active nic\\n[2018/06/22 04:48:44 AM] [INFO] eth2 is an embedded active nic\\n[2018/06/22 04:48:44 AM] [INFO] lo is not an active nic\\n[2018/06/22 04:48:44 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/06/22 04:48:44 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/06/22 04:48:44 AM] [INFO] nic3 mapped to: eth2\\n[2018/06/22 04:48:44 AM] [INFO] nic2 mapped 
to: eth1\\n[2018/06/22 04:48:44 AM] [INFO] nic1 mapped to: eth0\\n[2018/06/22 04:48:44 AM] [INFO] adding interface: eth0\\n[2018/06/22 04:48:44 AM] [INFO] adding custom route for interface: eth0\\n[2018/06/22 04:48:44 AM] [INFO] adding bridge: br-isolated\\n[2018/06/22 04:48:44 AM] [INFO] adding interface: eth1\\n[2018/06/22 04:48:44 AM] [INFO] adding vlan: vlan20\\n[2018/06/22 04:48:44 AM] [INFO] adding vlan: vlan30\\n[2018/06/22 04:48:44 AM] [INFO] adding vlan: vlan40\\n[2018/06/22 04:48:44 AM] [INFO] adding vlan: vlan50\\n[2018/06/22 04:48:44 AM] [INFO] adding bridge: br-ex\\n[2018/06/22 04:48:44 AM] [INFO] adding custom route for interface: br-ex\\n[2018/06/22 04:48:44 AM] [INFO] adding interface: eth2\\n[2018/06/22 04:48:44 AM] [INFO] applying network configs...\\n[2018/06/22 04:48:44 AM] [INFO] running ifdown on interface: vlan20\\n[2018/06/22 04:48:44 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/22 04:48:44 AM] [INFO] running ifdown on interface: vlan40\\n[2018/06/22 04:48:44 AM] [INFO] running ifdown on interface: vlan50\\n[2018/06/22 04:48:44 AM] [INFO] running ifdown on interface: eth2\\n[2018/06/22 04:48:44 AM] [INFO] running ifdown on interface: eth1\\n[2018/06/22 04:48:44 AM] [INFO] running ifdown on interface: eth0\\n[2018/06/22 04:48:45 AM] [INFO] running ifdown on interface: vlan50\\n[2018/06/22 04:48:45 AM] [INFO] running ifdown on interface: vlan20\\n[2018/06/22 04:48:45 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/22 04:48:45 AM] [INFO] running ifdown on interface: vlan40\\n[2018/06/22 04:48:45 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/06/22 04:48:45 AM] [INFO] running ifdown on bridge: br-ex\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-ex\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\\n[2018/06/22 04:48:45 AM] [INFO] 
Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-ex\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-ex\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\\n[2018/06/22 04:48:45 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/ifcfg-eth2\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/06/22 04:48:45 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/06/22 04:48:45 AM] [INFO] running ifup on bridge: br-ex\\n[2018/06/22 04:48:49 AM] [INFO] running ifup on interface: eth2\\n[2018/06/22 04:48:49 AM] [INFO] running ifup on interface: eth1\\n[2018/06/22 04:48:50 AM] [INFO] running ifup on interface: eth0\\n[2018/06/22 04:48:54 AM] [INFO] running ifup on interface: vlan50\\n[2018/06/22 04:48:58 AM] [INFO] running ifup on interface: vlan20\\n[2018/06/22 04:49:02 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/22 04:49:06 AM] [INFO] running ifup on interface: vlan40\\n[2018/06/22 04:49:11 AM] [INFO] running ifup on interface: vlan20\\n[2018/06/22 04:49:11 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/22 04:49:11 AM] [INFO] running ifup on interface: vlan40\\n[2018/06/22 04:49:11 AM] [INFO] running ifup on interface: vlan50\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url 
os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.3\\n++ '[' -n 192.168.24.3 ']'\\n++ break\\n++ echo 192.168.24.3\\n+ local METADATA_IP=192.168.24.3\\n+ '[' -n 192.168.24.3 ']'\\n+ is_local_ip 192.168.24.3\\n+ local IP_TO_CHECK=192.168.24.3\\n+ ip -o a\\n+ grep 'inet6\\\\? 192.168.24.3/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\\n+ _ping=ping\\n+ [[ 192.168.24.3 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.3\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}", > "[2018-06-22 04:49:12,154] (heat-config) [DEBUG] [2018-06-22 04:48:44,010] (heat-config) [INFO] interface_name=nic1", > "[2018-06-22 04:48:44,010] (heat-config) [INFO] bridge_name=br-ex", > "[2018-06-22 04:48:44,010] (heat-config) [INFO] deploy_server_id=c1fa7088-58e0-4167-924a-7460143754f1", > "[2018-06-22 04:48:44,011] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-22 04:48:44,011] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-gr4fhevw7mwd-0-u7fw5x6qbc5z-NetworkDeployment-fvhzbouufpoh-TripleOSoftwareDeployment-wvmdaocwtp2c/327d5572-eb54-436b-b3e7-c8cd60e05851", > "[2018-06-22 04:48:44,011] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-22 04:48:44,011] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-22 04:48:44,011] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/5ac903b7-9808-4d47-bc04-5a0642353924", > "[2018-06-22 
04:49:12,150] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.3...SUCCESS", > "", > "[2018-06-22 04:49:12,150] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.12/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.11/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.19/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.12/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"addresses\": [{\"ip_netmask\": \"10.0.0.111/24\"}], \"members\": [{\"name\": \"nic3\", \"primary\": true, \"type\": \"interface\"}], \"name\": \"bridge_name\", \"routes\": [{\"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"10.0.0.1\"}], \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}' ']'", > "+ '[' -z '' ']'", > "+ trap configure_safe_defaults EXIT", > "+ mkdir -p /etc/os-net-config", > "+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.12/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": 
[{\"ip_netmask\": \"172.17.3.11/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.19/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.12/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"addresses\": [{\"ip_netmask\": \"10.0.0.111/24\"}], \"members\": [{\"name\": \"nic3\", \"primary\": true, \"type\": \"interface\"}], \"name\": \"bridge_name\", \"routes\": [{\"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"10.0.0.1\"}], \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}'", > "++ type -t network_config_hook", > "+ '[' '' = function ']'", > "+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json", > "+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json", > "+ set +e", > "+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes", > "[2018/06/22 04:48:44 AM] [INFO] Using config file at: /etc/os-net-config/config.json", > "[2018/06/22 04:48:44 AM] [INFO] Ifcfg net config provider created.", > "[2018/06/22 04:48:44 AM] [INFO] Not using any mapping file.", > "[2018/06/22 04:48:44 AM] [INFO] Finding active nics", > "[2018/06/22 04:48:44 AM] [INFO] eth0 is an embedded active nic", > "[2018/06/22 04:48:44 AM] [INFO] eth1 is an embedded active nic", > "[2018/06/22 04:48:44 AM] [INFO] eth2 is an embedded active nic", > "[2018/06/22 04:48:44 AM] [INFO] lo is not an active nic", > "[2018/06/22 04:48:44 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)", > "[2018/06/22 04:48:44 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']", > "[2018/06/22 04:48:44 AM] [INFO] nic3 mapped to: eth2", > "[2018/06/22 04:48:44 AM] [INFO] nic2 mapped to: eth1", > "[2018/06/22 04:48:44 AM] [INFO] nic1 mapped to: eth0", > "[2018/06/22 04:48:44 AM] [INFO] adding interface: eth0", > "[2018/06/22 04:48:44 AM] [INFO] adding custom route for interface: eth0", > "[2018/06/22 
04:48:44 AM] [INFO] adding bridge: br-isolated", > "[2018/06/22 04:48:44 AM] [INFO] adding interface: eth1", > "[2018/06/22 04:48:44 AM] [INFO] adding vlan: vlan20", > "[2018/06/22 04:48:44 AM] [INFO] adding vlan: vlan30", > "[2018/06/22 04:48:44 AM] [INFO] adding vlan: vlan40", > "[2018/06/22 04:48:44 AM] [INFO] adding vlan: vlan50", > "[2018/06/22 04:48:44 AM] [INFO] adding bridge: br-ex", > "[2018/06/22 04:48:44 AM] [INFO] adding custom route for interface: br-ex", > "[2018/06/22 04:48:44 AM] [INFO] adding interface: eth2", > "[2018/06/22 04:48:44 AM] [INFO] applying network configs...", > "[2018/06/22 04:48:44 AM] [INFO] running ifdown on interface: vlan20", > "[2018/06/22 04:48:44 AM] [INFO] running ifdown on interface: vlan30", > "[2018/06/22 04:48:44 AM] [INFO] running ifdown on interface: vlan40", > "[2018/06/22 04:48:44 AM] [INFO] running ifdown on interface: vlan50", > "[2018/06/22 04:48:44 AM] [INFO] running ifdown on interface: eth2", > "[2018/06/22 04:48:44 AM] [INFO] running ifdown on interface: eth1", > "[2018/06/22 04:48:44 AM] [INFO] running ifdown on interface: eth0", > "[2018/06/22 04:48:45 AM] [INFO] running ifdown on interface: vlan50", > "[2018/06/22 04:48:45 AM] [INFO] running ifdown on interface: vlan20", > "[2018/06/22 04:48:45 AM] [INFO] running ifdown on interface: vlan30", > "[2018/06/22 04:48:45 AM] [INFO] running ifdown on interface: vlan40", > "[2018/06/22 04:48:45 AM] [INFO] running ifdown on bridge: br-isolated", > "[2018/06/22 04:48:45 AM] [INFO] running ifdown on bridge: br-ex", > "[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-ex", > "[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30", > "[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50", > "[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30", > "[2018/06/22 04:48:45 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/route-vlan40", > "[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20", > "[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50", > "[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated", > "[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0", > "[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1", > "[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2", > "[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50", > "[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-ex", > "[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20", > "[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40", > "[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20", > "[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-ex", > "[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30", > "[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated", > "[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated", > "[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2", > "[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1", > "[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0", > "[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40", > "[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2", > "[2018/06/22 04:48:45 AM] 
[INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0", > "[2018/06/22 04:48:45 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1", > "[2018/06/22 04:48:45 AM] [INFO] running ifup on bridge: br-isolated", > "[2018/06/22 04:48:45 AM] [INFO] running ifup on bridge: br-ex", > "[2018/06/22 04:48:49 AM] [INFO] running ifup on interface: eth2", > "[2018/06/22 04:48:49 AM] [INFO] running ifup on interface: eth1", > "[2018/06/22 04:48:50 AM] [INFO] running ifup on interface: eth0", > "[2018/06/22 04:48:54 AM] [INFO] running ifup on interface: vlan50", > "[2018/06/22 04:48:58 AM] [INFO] running ifup on interface: vlan20", > "[2018/06/22 04:49:02 AM] [INFO] running ifup on interface: vlan30", > "[2018/06/22 04:49:06 AM] [INFO] running ifup on interface: vlan40", > "[2018/06/22 04:49:11 AM] [INFO] running ifup on interface: vlan20", > "[2018/06/22 04:49:11 AM] [INFO] running ifup on interface: vlan30", > "[2018/06/22 04:49:11 AM] [INFO] running ifup on interface: vlan40", > "[2018/06/22 04:49:11 AM] [INFO] running ifup on interface: vlan50", > "+ RETVAL=2", > "+ set -e", > "+ [[ 2 == 2 ]]", > "+ ping_metadata_ip", > "++ get_metadata_ip", > "++ local METADATA_IP", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=", > "++ '[' -n '' ']'", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=", > "++ '[' -n '' ']'", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url 
os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=192.168.24.3", > "++ '[' -n 192.168.24.3 ']'", > "++ break", > "++ echo 192.168.24.3", > "+ local METADATA_IP=192.168.24.3", > "+ '[' -n 192.168.24.3 ']'", > "+ is_local_ip 192.168.24.3", > "+ local IP_TO_CHECK=192.168.24.3", > "+ ip -o a", > "+ grep 'inet6\\? 192.168.24.3/'", > "+ return 1", > "+ echo -n 'Trying to ping metadata IP 192.168.24.3...'", > "+ _ping=ping", > "+ [[ 192.168.24.3 =~ : ]]", > "+ local COUNT=0", > "+ ping -c 1 192.168.24.3", > "+ echo SUCCESS", > "+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'", > "+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules", > "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'", > "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'", > "+ configure_safe_defaults", > "+ [[ 0 == 0 ]]", > "+ return 0", > "", > "[2018-06-22 04:49:12,150] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/5ac903b7-9808-4d47-bc04-5a0642353924", > "", > "[2018-06-22 04:49:12,155] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-22 04:49:12,156] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/5ac903b7-9808-4d47-bc04-5a0642353924.json < /var/lib/heat-config/deployed/5ac903b7-9808-4d47-bc04-5a0642353924.notify.json", > "[2018-06-22 04:49:12,542] (heat-config) [INFO] ", > "[2018-06-22 04:49:12,542] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-22 04:49:12,678 p=11115 u=mistral | TASK [Check-mode for Run deployment NetworkDeployment] ************************* >2018-06-22 04:49:12,693 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} 
>2018-06-22 04:49:12,715 p=11115 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-22 04:49:12,763 p=11115 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "83e7feb7-b42d-4e84-9143-882cd270634f"}, "changed": false} >2018-06-22 04:49:12,785 p=11115 u=mistral | TASK [Render deployment file for ControllerUpgradeInitDeployment] ************** >2018-06-22 04:49:13,435 p=11115 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "577ef5e6cc879e76a344fb9cd3aef9a54c43e089", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerUpgradeInitDeployment-83e7feb7-b42d-4e84-9143-882cd270634f", "gid": 0, "group": "root", "md5sum": "e926a8f88c3f4875cbf8ece757fcaf87", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1183, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657352.83-211555661855499/source", "state": "file", "uid": 0} >2018-06-22 04:49:13,458 p=11115 u=mistral | TASK [Check if deployed file exists for ControllerUpgradeInitDeployment] ******* >2018-06-22 04:49:13,805 p=11115 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 04:49:13,830 p=11115 u=mistral | TASK [Check previous deployment rc for ControllerUpgradeInitDeployment] ******** >2018-06-22 04:49:13,848 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:49:13,870 p=11115 u=mistral | TASK [Remove deployed file for ControllerUpgradeInitDeployment when previous deployment failed] *** >2018-06-22 04:49:13,887 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:49:13,909 p=11115 u=mistral | TASK [Force remove deployed file for ControllerUpgradeInitDeployment] ********** >2018-06-22 04:49:13,924 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": 
"Conditional result was False"} >2018-06-22 04:49:13,946 p=11115 u=mistral | TASK [Run deployment ControllerUpgradeInitDeployment] ************************** >2018-06-22 04:49:14,756 p=11115 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/83e7feb7-b42d-4e84-9143-882cd270634f.notify.json)", "delta": "0:00:00.450978", "end": "2018-06-22 04:49:14.752070", "rc": 0, "start": "2018-06-22 04:49:14.301092", "stderr": "[2018-06-22 04:49:14,326] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/83e7feb7-b42d-4e84-9143-882cd270634f.json\n[2018-06-22 04:49:14,353] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-22 04:49:14,353] (heat-config) [DEBUG] [2018-06-22 04:49:14,345] (heat-config) [INFO] deploy_server_id=c1fa7088-58e0-4167-924a-7460143754f1\n[2018-06-22 04:49:14,345] (heat-config) [INFO] deploy_action=CREATE\n[2018-06-22 04:49:14,345] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-gr4fhevw7mwd-0-u7fw5x6qbc5z-ControllerUpgradeInitDeployment-2tm32b4hyvyc/13adad67-b436-470b-baf2-4a73c63f5b34\n[2018-06-22 04:49:14,345] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-22 04:49:14,345] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-22 04:49:14,346] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/83e7feb7-b42d-4e84-9143-882cd270634f\n[2018-06-22 04:49:14,350] (heat-config) [INFO] \n[2018-06-22 04:49:14,350] (heat-config) [DEBUG] \n[2018-06-22 04:49:14,350] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/83e7feb7-b42d-4e84-9143-882cd270634f\n\n[2018-06-22 04:49:14,353] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-22 04:49:14,354] (heat-config) [DEBUG] Running heat-config-notify 
/var/lib/heat-config/deployed/83e7feb7-b42d-4e84-9143-882cd270634f.json < /var/lib/heat-config/deployed/83e7feb7-b42d-4e84-9143-882cd270634f.notify.json\n[2018-06-22 04:49:14,745] (heat-config) [INFO] \n[2018-06-22 04:49:14,745] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-22 04:49:14,326] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/83e7feb7-b42d-4e84-9143-882cd270634f.json", "[2018-06-22 04:49:14,353] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-22 04:49:14,353] (heat-config) [DEBUG] [2018-06-22 04:49:14,345] (heat-config) [INFO] deploy_server_id=c1fa7088-58e0-4167-924a-7460143754f1", "[2018-06-22 04:49:14,345] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-22 04:49:14,345] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-gr4fhevw7mwd-0-u7fw5x6qbc5z-ControllerUpgradeInitDeployment-2tm32b4hyvyc/13adad67-b436-470b-baf2-4a73c63f5b34", "[2018-06-22 04:49:14,345] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-22 04:49:14,345] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-22 04:49:14,346] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/83e7feb7-b42d-4e84-9143-882cd270634f", "[2018-06-22 04:49:14,350] (heat-config) [INFO] ", "[2018-06-22 04:49:14,350] (heat-config) [DEBUG] ", "[2018-06-22 04:49:14,350] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/83e7feb7-b42d-4e84-9143-882cd270634f", "", "[2018-06-22 04:49:14,353] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-22 04:49:14,354] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/83e7feb7-b42d-4e84-9143-882cd270634f.json < /var/lib/heat-config/deployed/83e7feb7-b42d-4e84-9143-882cd270634f.notify.json", "[2018-06-22 04:49:14,745] (heat-config) [INFO] ", "[2018-06-22 04:49:14,745] (heat-config) [DEBUG] "], "stdout": "", 
"stdout_lines": []} >2018-06-22 04:49:14,780 p=11115 u=mistral | TASK [Output for ControllerUpgradeInitDeployment] ****************************** >2018-06-22 04:49:14,827 p=11115 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-22 04:49:14,326] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/83e7feb7-b42d-4e84-9143-882cd270634f.json", > "[2018-06-22 04:49:14,353] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-22 04:49:14,353] (heat-config) [DEBUG] [2018-06-22 04:49:14,345] (heat-config) [INFO] deploy_server_id=c1fa7088-58e0-4167-924a-7460143754f1", > "[2018-06-22 04:49:14,345] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-22 04:49:14,345] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-gr4fhevw7mwd-0-u7fw5x6qbc5z-ControllerUpgradeInitDeployment-2tm32b4hyvyc/13adad67-b436-470b-baf2-4a73c63f5b34", > "[2018-06-22 04:49:14,345] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-22 04:49:14,345] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-22 04:49:14,346] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/83e7feb7-b42d-4e84-9143-882cd270634f", > "[2018-06-22 04:49:14,350] (heat-config) [INFO] ", > "[2018-06-22 04:49:14,350] (heat-config) [DEBUG] ", > "[2018-06-22 04:49:14,350] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/83e7feb7-b42d-4e84-9143-882cd270634f", > "", > "[2018-06-22 04:49:14,353] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-22 04:49:14,354] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/83e7feb7-b42d-4e84-9143-882cd270634f.json < /var/lib/heat-config/deployed/83e7feb7-b42d-4e84-9143-882cd270634f.notify.json", > "[2018-06-22 04:49:14,745] (heat-config) [INFO] ", > "[2018-06-22 04:49:14,745] 
(heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-22 04:49:14,849 p=11115 u=mistral | TASK [Check-mode for Run deployment ControllerUpgradeInitDeployment] *********** >2018-06-22 04:49:14,864 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:49:14,887 p=11115 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-22 04:49:15,260 p=11115 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "ecff717d-fed5-4973-b189-81b13931b912"}, "changed": false} >2018-06-22 04:49:15,283 p=11115 u=mistral | TASK [Render deployment file for ControllerDeployment] ************************* >2018-06-22 04:49:16,260 p=11115 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "35355042b0ff5a034b8a07a8a3b8f11a436662ef", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerDeployment-ecff717d-fed5-4973-b189-81b13931b912", "gid": 0, "group": "root", "md5sum": "4cb1c8edac9c37a76ef1ae51d07f72e3", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 73460, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657355.68-249195527123389/source", "state": "file", "uid": 0} >2018-06-22 04:49:16,282 p=11115 u=mistral | TASK [Check if deployed file exists for ControllerDeployment] ****************** >2018-06-22 04:49:16,622 p=11115 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 04:49:16,647 p=11115 u=mistral | TASK [Check previous deployment rc for ControllerDeployment] ******************* >2018-06-22 04:49:16,663 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:49:16,686 p=11115 u=mistral | TASK [Remove deployed file for ControllerDeployment when previous deployment failed] *** >2018-06-22 04:49:16,703 p=11115 u=mistral | skipping: 
[controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:49:16,727 p=11115 u=mistral | TASK [Force remove deployed file for ControllerDeployment] ********************* >2018-06-22 04:49:16,744 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:49:16,767 p=11115 u=mistral | TASK [Run deployment ControllerDeployment] ************************************* >2018-06-22 04:49:17,699 p=11115 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/ecff717d-fed5-4973-b189-81b13931b912.notify.json)", "delta": "0:00:00.552997", "end": "2018-06-22 04:49:17.702446", "rc": 0, "start": "2018-06-22 04:49:17.149449", "stderr": "[2018-06-22 04:49:17,180] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/ecff717d-fed5-4973-b189-81b13931b912.json\n[2018-06-22 04:49:17,295] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-22 04:49:17,295] (heat-config) [DEBUG] \n[2018-06-22 04:49:17,295] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-06-22 04:49:17,295] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/ecff717d-fed5-4973-b189-81b13931b912.json < /var/lib/heat-config/deployed/ecff717d-fed5-4973-b189-81b13931b912.notify.json\n[2018-06-22 04:49:17,695] (heat-config) [INFO] \n[2018-06-22 04:49:17,695] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-22 04:49:17,180] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/ecff717d-fed5-4973-b189-81b13931b912.json", "[2018-06-22 04:49:17,295] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-22 04:49:17,295] (heat-config) [DEBUG] 
", "[2018-06-22 04:49:17,295] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", "[2018-06-22 04:49:17,295] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/ecff717d-fed5-4973-b189-81b13931b912.json < /var/lib/heat-config/deployed/ecff717d-fed5-4973-b189-81b13931b912.notify.json", "[2018-06-22 04:49:17,695] (heat-config) [INFO] ", "[2018-06-22 04:49:17,695] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-22 04:49:17,725 p=11115 u=mistral | TASK [Output for ControllerDeployment] ***************************************** >2018-06-22 04:49:17,774 p=11115 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-22 04:49:17,180] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/ecff717d-fed5-4973-b189-81b13931b912.json", > "[2018-06-22 04:49:17,295] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-22 04:49:17,295] (heat-config) [DEBUG] ", > "[2018-06-22 04:49:17,295] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", > "[2018-06-22 04:49:17,295] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/ecff717d-fed5-4973-b189-81b13931b912.json < /var/lib/heat-config/deployed/ecff717d-fed5-4973-b189-81b13931b912.notify.json", > "[2018-06-22 04:49:17,695] (heat-config) [INFO] ", > "[2018-06-22 04:49:17,695] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-22 04:49:17,800 p=11115 u=mistral | TASK [Check-mode for Run deployment ControllerDeployment] ********************** >2018-06-22 04:49:17,815 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:49:17,839 p=11115 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-22 04:49:17,894 p=11115 u=mistral | 
ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "31df93a5-d855-4383-b908-cd27426083a4"}, "changed": false} >2018-06-22 04:49:17,919 p=11115 u=mistral | TASK [Render deployment file for ControllerHostsDeployment] ******************** >2018-06-22 04:49:18,498 p=11115 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "54508f1aa1b6e38ee9a934767cea261766c3612a", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerHostsDeployment-31df93a5-d855-4383-b908-cd27426083a4", "gid": 0, "group": "root", "md5sum": "65e8b32f42e29b782a0bc271de131064", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4086, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657357.97-11443774250503/source", "state": "file", "uid": 0} >2018-06-22 04:49:18,522 p=11115 u=mistral | TASK [Check if deployed file exists for ControllerHostsDeployment] ************* >2018-06-22 04:49:18,876 p=11115 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 04:49:18,901 p=11115 u=mistral | TASK [Check previous deployment rc for ControllerHostsDeployment] ************** >2018-06-22 04:49:18,918 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:49:18,942 p=11115 u=mistral | TASK [Remove deployed file for ControllerHostsDeployment when previous deployment failed] *** >2018-06-22 04:49:18,960 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:49:18,986 p=11115 u=mistral | TASK [Force remove deployed file for ControllerHostsDeployment] **************** >2018-06-22 04:49:19,001 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:49:19,025 p=11115 u=mistral | TASK [Run deployment ControllerHostsDeployment] ******************************** >2018-06-22 04:49:19,892 
p=11115 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/31df93a5-d855-4383-b908-cd27426083a4.notify.json)", "delta": "0:00:00.462606", "end": "2018-06-22 04:49:19.866574", "rc": 0, "start": "2018-06-22 04:49:19.403968", "stderr": "[2018-06-22 04:49:19,427] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/31df93a5-d855-4383-b908-cd27426083a4.json\n[2018-06-22 04:49:19,463] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' -z '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain 
ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain 
ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 
ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 
ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 
ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.7 
overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.7 
overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 
overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 
0}\n[2018-06-22 04:49:19,463] (heat-config) [DEBUG] [2018-06-22 04:49:19,447] (heat-config) [INFO] hosts=192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.11 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.10 controller-0.localdomain controller-0\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.111 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.14 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.16 compute-0.external.localdomain compute-0.external\n192.168.24.16 compute-0.management.localdomain compute-0.management\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.17 ceph-0.localdomain ceph-0\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane\n[2018-06-22 04:49:19,447] (heat-config) [INFO] deploy_server_id=c1fa7088-58e0-4167-924a-7460143754f1\n[2018-06-22 04:49:19,447] (heat-config) [INFO] 
deploy_action=CREATE\n[2018-06-22 04:49:19,447] (heat-config) [INFO] deploy_stack_id=overcloud-ControllerHostsDeployment-agb42nrtvq3b-0-xivu2p77regj/e13e3f9e-d393-402a-b0f1-dbe701537b4c\n[2018-06-22 04:49:19,447] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-22 04:49:19,447] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-22 04:49:19,447] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/31df93a5-d855-4383-b908-cd27426083a4\n[2018-06-22 04:49:19,459] (heat-config) [INFO] \n[2018-06-22 04:49:19,460] (heat-config) [DEBUG] + set -o pipefail\n+ '[' '!' -z '192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.11 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.10 controller-0.localdomain controller-0\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.111 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.14 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.16 compute-0.external.localdomain compute-0.external\n192.168.24.16 compute-0.management.localdomain compute-0.management\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.17 ceph-0.localdomain ceph-0\n172.17.3.17 ceph-0.storage.localdomain 
ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.11 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.10 controller-0.localdomain controller-0\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.111 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.14 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.16 compute-0.external.localdomain compute-0.external\n192.168.24.16 compute-0.management.localdomain compute-0.management\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.17 ceph-0.localdomain ceph-0\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.13 ceph-0.internalapi.localdomain 
ceph-0.internalapi\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.11 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.10 controller-0.localdomain controller-0\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.111 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.14 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.16 compute-0.external.localdomain compute-0.external\n192.168.24.16 compute-0.management.localdomain compute-0.management\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.17 ceph-0.localdomain ceph-0\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\n192.168.24.13 ceph-0.management.localdomain 
ceph-0.management\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.11 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.10 controller-0.localdomain controller-0\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.111 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.14 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.16 compute-0.external.localdomain compute-0.external\n192.168.24.16 compute-0.management.localdomain compute-0.management\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.17 ceph-0.localdomain ceph-0\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\n192.168.24.13 
ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.11 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.10 controller-0.localdomain controller-0\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.111 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.14 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.16 compute-0.external.localdomain compute-0.external\n192.168.24.16 compute-0.management.localdomain compute-0.management\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.17 ceph-0.localdomain ceph-0\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\n+ local 
'entries=192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.11 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.10 controller-0.localdomain controller-0\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.111 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.14 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.16 compute-0.external.localdomain compute-0.external\n192.168.24.16 compute-0.management.localdomain compute-0.management\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.17 ceph-0.localdomain ceph-0\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.11 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.10 controller-0.localdomain controller-0\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.111 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.14 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.16 compute-0.external.localdomain compute-0.external\n192.168.24.16 compute-0.management.localdomain compute-0.management\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.17 ceph-0.localdomain ceph-0\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in 
'/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.11 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.10 controller-0.localdomain controller-0\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.111 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.14 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.16 compute-0.external.localdomain compute-0.external\n192.168.24.16 compute-0.management.localdomain compute-0.management\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.17 ceph-0.localdomain ceph-0\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.16 
overcloud.storagemgmt.localdomain\n172.17.1.11 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.10 controller-0.localdomain controller-0\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.111 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.14 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.16 compute-0.external.localdomain compute-0.external\n192.168.24.16 compute-0.management.localdomain compute-0.management\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.17 ceph-0.localdomain ceph-0\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.11 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.10 controller-0.localdomain controller-0\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.111 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.14 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.16 compute-0.external.localdomain compute-0.external\n192.168.24.16 compute-0.management.localdomain compute-0.management\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.17 ceph-0.localdomain ceph-0\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in 
'/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.11 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.10 controller-0.localdomain controller-0\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.111 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.14 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.16 compute-0.external.localdomain compute-0.external\n192.168.24.16 compute-0.management.localdomain compute-0.management\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.17 ceph-0.localdomain ceph-0\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.16 
overcloud.storagemgmt.localdomain\n172.17.1.11 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.10 controller-0.localdomain controller-0\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.111 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.14 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.16 compute-0.external.localdomain compute-0.external\n192.168.24.16 compute-0.management.localdomain compute-0.management\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.17 ceph-0.localdomain ceph-0\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.11 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.10 controller-0.localdomain controller-0\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.111 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.14 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.16 compute-0.external.localdomain compute-0.external\n192.168.24.16 compute-0.management.localdomain compute-0.management\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.17 ceph-0.localdomain ceph-0\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ write_entries 
/etc/hosts '192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.11 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.10 controller-0.localdomain controller-0\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.111 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.14 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.16 compute-0.external.localdomain compute-0.external\n192.168.24.16 compute-0.management.localdomain compute-0.management\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.17 ceph-0.localdomain ceph-0\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/hosts\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.11 overcloud.internalapi.localdomain\n10.0.0.106 
overcloud.localdomain\n172.17.1.10 controller-0.localdomain controller-0\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.111 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.14 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.16 compute-0.external.localdomain compute-0.external\n192.168.24.16 compute-0.management.localdomain compute-0.management\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.17 ceph-0.localdomain ceph-0\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/hosts ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.11 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.10 controller-0.localdomain controller-0\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.111 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.14 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.16 compute-0.external.localdomain compute-0.external\n192.168.24.16 compute-0.management.localdomain compute-0.management\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.17 ceph-0.localdomain ceph-0\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n\n[2018-06-22 04:49:19,460] (heat-config) [INFO] Completed 
/var/lib/heat-config/heat-config-script/31df93a5-d855-4383-b908-cd27426083a4\n\n[2018-06-22 04:49:19,463] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-22 04:49:19,464] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/31df93a5-d855-4383-b908-cd27426083a4.json < /var/lib/heat-config/deployed/31df93a5-d855-4383-b908-cd27426083a4.notify.json\n[2018-06-22 04:49:19,859] (heat-config) [INFO] \n[2018-06-22 04:49:19,859] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-22 04:49:19,427] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/31df93a5-d855-4383-b908-cd27426083a4.json", "[2018-06-22 04:49:19,463] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' -z '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 
compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain 
compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 
ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain 
ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain 
ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.7 
overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 
overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 
0}", "[2018-06-22 04:49:19,463] (heat-config) [DEBUG] [2018-06-22 04:49:19,447] (heat-config) [INFO] hosts=192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.11 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.10 controller-0.localdomain controller-0", "172.17.3.11 controller-0.storage.localdomain controller-0.storage", "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.111 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.14 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.16 compute-0.external.localdomain compute-0.external", "192.168.24.16 compute-0.management.localdomain compute-0.management", "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.17 ceph-0.localdomain ceph-0", "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.13 ceph-0.external.localdomain ceph-0.external", "192.168.24.13 ceph-0.management.localdomain ceph-0.management", "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane", "[2018-06-22 04:49:19,447] (heat-config) [INFO] 
deploy_server_id=c1fa7088-58e0-4167-924a-7460143754f1", "[2018-06-22 04:49:19,447] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-22 04:49:19,447] (heat-config) [INFO] deploy_stack_id=overcloud-ControllerHostsDeployment-agb42nrtvq3b-0-xivu2p77regj/e13e3f9e-d393-402a-b0f1-dbe701537b4c", "[2018-06-22 04:49:19,447] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-22 04:49:19,447] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-22 04:49:19,447] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/31df93a5-d855-4383-b908-cd27426083a4", "[2018-06-22 04:49:19,459] (heat-config) [INFO] ", "[2018-06-22 04:49:19,460] (heat-config) [DEBUG] + set -o pipefail", "+ '[' '!' -z '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.11 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.10 controller-0.localdomain controller-0", "172.17.3.11 controller-0.storage.localdomain controller-0.storage", "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.111 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.14 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.16 compute-0.external.localdomain compute-0.external", "192.168.24.16 compute-0.management.localdomain compute-0.management", 
"192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.17 ceph-0.localdomain ceph-0", "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.13 ceph-0.external.localdomain ceph-0.external", "192.168.24.13 ceph-0.management.localdomain ceph-0.management", "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.11 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.10 controller-0.localdomain controller-0", "172.17.3.11 controller-0.storage.localdomain controller-0.storage", "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.111 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.14 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.16 compute-0.external.localdomain compute-0.external", "192.168.24.16 compute-0.management.localdomain compute-0.management", "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", 
"172.17.3.17 ceph-0.localdomain ceph-0", "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.13 ceph-0.external.localdomain ceph-0.external", "192.168.24.13 ceph-0.management.localdomain ceph-0.management", "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.debian.tmpl", "+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.11 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.10 controller-0.localdomain controller-0", "172.17.3.11 controller-0.storage.localdomain controller-0.storage", "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.111 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.14 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.16 compute-0.external.localdomain compute-0.external", "192.168.24.16 compute-0.management.localdomain compute-0.management", "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.17 ceph-0.localdomain ceph-0", "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.13 ceph-0.external.localdomain ceph-0.external", "192.168.24.13 ceph-0.management.localdomain ceph-0.management", "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.11 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.10 controller-0.localdomain controller-0", "172.17.3.11 controller-0.storage.localdomain controller-0.storage", "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.111 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.14 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.16 compute-0.external.localdomain compute-0.external", "192.168.24.16 compute-0.management.localdomain compute-0.management", "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.17 ceph-0.localdomain ceph-0", "172.17.3.17 ceph-0.storage.localdomain 
ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.13 ceph-0.external.localdomain ceph-0.external", "192.168.24.13 ceph-0.management.localdomain ceph-0.management", "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.11 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.10 controller-0.localdomain controller-0", "172.17.3.11 controller-0.storage.localdomain controller-0.storage", "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.111 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.14 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.16 compute-0.external.localdomain compute-0.external", "192.168.24.16 compute-0.management.localdomain compute-0.management", "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.17 ceph-0.localdomain ceph-0", "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.13 ceph-0.external.localdomain ceph-0.external", "192.168.24.13 ceph-0.management.localdomain ceph-0.management", "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl", "+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.11 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.10 controller-0.localdomain controller-0", "172.17.3.11 controller-0.storage.localdomain controller-0.storage", "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.111 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.14 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.16 compute-0.external.localdomain compute-0.external", "192.168.24.16 compute-0.management.localdomain compute-0.management", "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.17 ceph-0.localdomain ceph-0", "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", 
"192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.13 ceph-0.external.localdomain ceph-0.external", "192.168.24.13 ceph-0.management.localdomain ceph-0.management", "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.11 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.10 controller-0.localdomain controller-0", "172.17.3.11 controller-0.storage.localdomain controller-0.storage", "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.111 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.14 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.16 compute-0.external.localdomain compute-0.external", "192.168.24.16 compute-0.management.localdomain compute-0.management", "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.17 ceph-0.localdomain ceph-0", "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.13 
ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.13 ceph-0.external.localdomain ceph-0.external", "192.168.24.13 ceph-0.management.localdomain ceph-0.management", "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.11 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.10 controller-0.localdomain controller-0", "172.17.3.11 controller-0.storage.localdomain controller-0.storage", "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.111 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.14 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.16 compute-0.external.localdomain compute-0.external", "192.168.24.16 compute-0.management.localdomain compute-0.management", "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.17 ceph-0.localdomain ceph-0", "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", 
"192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.13 ceph-0.external.localdomain ceph-0.external", "192.168.24.13 ceph-0.management.localdomain ceph-0.management", "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.redhat.tmpl", "+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.11 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.10 controller-0.localdomain controller-0", "172.17.3.11 controller-0.storage.localdomain controller-0.storage", "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.111 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.14 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.16 compute-0.external.localdomain compute-0.external", "192.168.24.16 compute-0.management.localdomain compute-0.management", "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.17 ceph-0.localdomain ceph-0", "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.13 ceph-0.external.localdomain ceph-0.external", 
"192.168.24.13 ceph-0.management.localdomain ceph-0.management", "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.11 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.10 controller-0.localdomain controller-0", "172.17.3.11 controller-0.storage.localdomain controller-0.storage", "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.111 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.14 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.16 compute-0.external.localdomain compute-0.external", "192.168.24.16 compute-0.management.localdomain compute-0.management", "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.17 ceph-0.localdomain ceph-0", "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.13 
ceph-0.external.localdomain ceph-0.external", "192.168.24.13 ceph-0.management.localdomain ceph-0.management", "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.11 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.10 controller-0.localdomain controller-0", "172.17.3.11 controller-0.storage.localdomain controller-0.storage", "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.111 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.14 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.16 compute-0.external.localdomain compute-0.external", "192.168.24.16 compute-0.management.localdomain compute-0.management", "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.17 ceph-0.localdomain ceph-0", "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.13 ceph-0.external.localdomain ceph-0.external", 
"192.168.24.13 ceph-0.management.localdomain ceph-0.management", "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.suse.tmpl", "+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.11 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.10 controller-0.localdomain controller-0", "172.17.3.11 controller-0.storage.localdomain controller-0.storage", "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.111 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.14 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.16 compute-0.external.localdomain compute-0.external", "192.168.24.16 compute-0.management.localdomain compute-0.management", "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.17 ceph-0.localdomain ceph-0", "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.13 ceph-0.external.localdomain ceph-0.external", "192.168.24.13 ceph-0.management.localdomain ceph-0.management", "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ 
'[' '!' -f /etc/cloud/templates/hosts.suse.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.11 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.10 controller-0.localdomain controller-0", "172.17.3.11 controller-0.storage.localdomain controller-0.storage", "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.111 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.14 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.16 compute-0.external.localdomain compute-0.external", "192.168.24.16 compute-0.management.localdomain compute-0.management", "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.17 ceph-0.localdomain ceph-0", "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.13 ceph-0.external.localdomain ceph-0.external", "192.168.24.13 ceph-0.management.localdomain ceph-0.management", "192.168.24.13 ceph-0.ctlplane.localdomain 
ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ write_entries /etc/hosts '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.11 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.10 controller-0.localdomain controller-0", "172.17.3.11 controller-0.storage.localdomain controller-0.storage", "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.111 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.14 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.16 compute-0.external.localdomain compute-0.external", "192.168.24.16 compute-0.management.localdomain compute-0.management", "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.17 ceph-0.localdomain ceph-0", "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.13 ceph-0.external.localdomain ceph-0.external", "192.168.24.13 ceph-0.management.localdomain ceph-0.management", "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/hosts", "+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.16 
overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.11 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.10 controller-0.localdomain controller-0", "172.17.3.11 controller-0.storage.localdomain controller-0.storage", "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.111 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.14 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.16 compute-0.external.localdomain compute-0.external", "192.168.24.16 compute-0.management.localdomain compute-0.management", "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.17 ceph-0.localdomain ceph-0", "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.13 ceph-0.external.localdomain ceph-0.external", "192.168.24.13 ceph-0.management.localdomain ceph-0.management", "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' 
-f /etc/hosts ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.11 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.10 controller-0.localdomain controller-0", "172.17.3.11 controller-0.storage.localdomain controller-0.storage", "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.111 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.14 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.16 compute-0.external.localdomain compute-0.external", "192.168.24.16 compute-0.management.localdomain compute-0.management", "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.17 ceph-0.localdomain ceph-0", "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.13 ceph-0.external.localdomain ceph-0.external", "192.168.24.13 ceph-0.management.localdomain ceph-0.management", "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", 
"", "[2018-06-22 04:49:19,460] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/31df93a5-d855-4383-b908-cd27426083a4", "", "[2018-06-22 04:49:19,463] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-22 04:49:19,464] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/31df93a5-d855-4383-b908-cd27426083a4.json < /var/lib/heat-config/deployed/31df93a5-d855-4383-b908-cd27426083a4.notify.json", "[2018-06-22 04:49:19,859] (heat-config) [INFO] ", "[2018-06-22 04:49:19,859] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-22 04:49:19,924 p=11115 u=mistral | TASK [Output for ControllerHostsDeployment] ************************************ >2018-06-22 04:49:20,039 p=11115 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-22 04:49:19,427] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/31df93a5-d855-4383-b908-cd27426083a4.json", > "[2018-06-22 04:49:19,463] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' 
-z '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 
overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain 
controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.7 
overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.7 
overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.7 
overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 
overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 
0}", > "[2018-06-22 04:49:19,463] (heat-config) [DEBUG] [2018-06-22 04:49:19,447] (heat-config) [INFO] hosts=192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.11 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.10 controller-0.localdomain controller-0", > "172.17.3.11 controller-0.storage.localdomain controller-0.storage", > "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.111 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.14 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.16 compute-0.external.localdomain compute-0.external", > "192.168.24.16 compute-0.management.localdomain compute-0.management", > "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.17 ceph-0.localdomain ceph-0", > "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.13 ceph-0.external.localdomain ceph-0.external", > "192.168.24.13 ceph-0.management.localdomain ceph-0.management", > "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane", > "[2018-06-22 04:49:19,447] 
(heat-config) [INFO] deploy_server_id=c1fa7088-58e0-4167-924a-7460143754f1", > "[2018-06-22 04:49:19,447] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-22 04:49:19,447] (heat-config) [INFO] deploy_stack_id=overcloud-ControllerHostsDeployment-agb42nrtvq3b-0-xivu2p77regj/e13e3f9e-d393-402a-b0f1-dbe701537b4c", > "[2018-06-22 04:49:19,447] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-22 04:49:19,447] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-22 04:49:19,447] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/31df93a5-d855-4383-b908-cd27426083a4", > "[2018-06-22 04:49:19,459] (heat-config) [INFO] ", > "[2018-06-22 04:49:19,460] (heat-config) [DEBUG] + set -o pipefail", > "+ '[' '!' -z '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.11 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.10 controller-0.localdomain controller-0", > "172.17.3.11 controller-0.storage.localdomain controller-0.storage", > "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.111 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.14 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.16 compute-0.external.localdomain compute-0.external", > 
"192.168.24.16 compute-0.management.localdomain compute-0.management", > "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.17 ceph-0.localdomain ceph-0", > "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.13 ceph-0.external.localdomain ceph-0.external", > "192.168.24.13 ceph-0.management.localdomain ceph-0.management", > "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.11 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.10 controller-0.localdomain controller-0", > "172.17.3.11 controller-0.storage.localdomain controller-0.storage", > "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.111 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.14 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.16 compute-0.external.localdomain compute-0.external", > "192.168.24.16 
compute-0.management.localdomain compute-0.management", > "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.17 ceph-0.localdomain ceph-0", > "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.13 ceph-0.external.localdomain ceph-0.external", > "192.168.24.13 ceph-0.management.localdomain ceph-0.management", > "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.debian.tmpl", > "+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.11 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.10 controller-0.localdomain controller-0", > "172.17.3.11 controller-0.storage.localdomain controller-0.storage", > "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.111 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.14 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.16 compute-0.external.localdomain compute-0.external", > "192.168.24.16 compute-0.management.localdomain compute-0.management", > 
"192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.17 ceph-0.localdomain ceph-0", > "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.13 ceph-0.external.localdomain ceph-0.external", > "192.168.24.13 ceph-0.management.localdomain ceph-0.management", > "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.11 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.10 controller-0.localdomain controller-0", > "172.17.3.11 controller-0.storage.localdomain controller-0.storage", > "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.111 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.14 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.16 
compute-0.external.localdomain compute-0.external", > "192.168.24.16 compute-0.management.localdomain compute-0.management", > "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.17 ceph-0.localdomain ceph-0", > "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.13 ceph-0.external.localdomain ceph-0.external", > "192.168.24.13 ceph-0.management.localdomain ceph-0.management", > "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.11 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.10 controller-0.localdomain controller-0", > "172.17.3.11 controller-0.storage.localdomain controller-0.storage", > "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.111 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.14 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain 
compute-0.tenant", > "192.168.24.16 compute-0.external.localdomain compute-0.external", > "192.168.24.16 compute-0.management.localdomain compute-0.management", > "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.17 ceph-0.localdomain ceph-0", > "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.13 ceph-0.external.localdomain ceph-0.external", > "192.168.24.13 ceph-0.management.localdomain ceph-0.management", > "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl", > "+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.11 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.10 controller-0.localdomain controller-0", > "172.17.3.11 controller-0.storage.localdomain controller-0.storage", > "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.111 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.14 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.16 
compute-0.external.localdomain compute-0.external", > "192.168.24.16 compute-0.management.localdomain compute-0.management", > "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.17 ceph-0.localdomain ceph-0", > "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.13 ceph-0.external.localdomain ceph-0.external", > "192.168.24.13 ceph-0.management.localdomain ceph-0.management", > "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.11 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.10 controller-0.localdomain controller-0", > "172.17.3.11 controller-0.storage.localdomain controller-0.storage", > "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.111 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.14 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.14 compute-0.internalapi.localdomain 
compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.16 compute-0.external.localdomain compute-0.external", > "192.168.24.16 compute-0.management.localdomain compute-0.management", > "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.17 ceph-0.localdomain ceph-0", > "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.13 ceph-0.external.localdomain ceph-0.external", > "192.168.24.13 ceph-0.management.localdomain ceph-0.management", > "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.11 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.10 controller-0.localdomain controller-0", > "172.17.3.11 controller-0.storage.localdomain controller-0.storage", > "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.111 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.14 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.14 
compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.16 compute-0.external.localdomain compute-0.external", > "192.168.24.16 compute-0.management.localdomain compute-0.management", > "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.17 ceph-0.localdomain ceph-0", > "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.13 ceph-0.external.localdomain ceph-0.external", > "192.168.24.13 ceph-0.management.localdomain ceph-0.management", > "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.redhat.tmpl", > "+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.11 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.10 controller-0.localdomain controller-0", > "172.17.3.11 controller-0.storage.localdomain controller-0.storage", > "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.111 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.14 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", > 
"172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.16 compute-0.external.localdomain compute-0.external", > "192.168.24.16 compute-0.management.localdomain compute-0.management", > "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.17 ceph-0.localdomain ceph-0", > "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.13 ceph-0.external.localdomain ceph-0.external", > "192.168.24.13 ceph-0.management.localdomain ceph-0.management", > "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.11 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.10 controller-0.localdomain controller-0", > "172.17.3.11 controller-0.storage.localdomain controller-0.storage", > "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.111 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.14 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.16 compute-0.storagemgmt.localdomain 
compute-0.storagemgmt", > "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.16 compute-0.external.localdomain compute-0.external", > "192.168.24.16 compute-0.management.localdomain compute-0.management", > "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.17 ceph-0.localdomain ceph-0", > "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.13 ceph-0.external.localdomain ceph-0.external", > "192.168.24.13 ceph-0.management.localdomain ceph-0.management", > "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.11 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.10 controller-0.localdomain controller-0", > "172.17.3.11 controller-0.storage.localdomain controller-0.storage", > "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.111 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.14 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.16 
compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.16 compute-0.external.localdomain compute-0.external", > "192.168.24.16 compute-0.management.localdomain compute-0.management", > "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.17 ceph-0.localdomain ceph-0", > "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.13 ceph-0.external.localdomain ceph-0.external", > "192.168.24.13 ceph-0.management.localdomain ceph-0.management", > "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.suse.tmpl", > "+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.11 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.10 controller-0.localdomain controller-0", > "172.17.3.11 controller-0.storage.localdomain controller-0.storage", > "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.111 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.14 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > 
"172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.16 compute-0.external.localdomain compute-0.external", > "192.168.24.16 compute-0.management.localdomain compute-0.management", > "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.17 ceph-0.localdomain ceph-0", > "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.13 ceph-0.external.localdomain ceph-0.external", > "192.168.24.13 ceph-0.management.localdomain ceph-0.management", > "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.suse.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.11 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.10 controller-0.localdomain controller-0", > "172.17.3.11 controller-0.storage.localdomain controller-0.storage", > "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.111 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.14 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain 
compute-0.storage", > "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.16 compute-0.external.localdomain compute-0.external", > "192.168.24.16 compute-0.management.localdomain compute-0.management", > "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.17 ceph-0.localdomain ceph-0", > "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.13 ceph-0.external.localdomain ceph-0.external", > "192.168.24.13 ceph-0.management.localdomain ceph-0.management", > "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ write_entries /etc/hosts '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.11 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.10 controller-0.localdomain controller-0", > "172.17.3.11 controller-0.storage.localdomain controller-0.storage", > "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.111 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.14 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.16 
compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.16 compute-0.external.localdomain compute-0.external", > "192.168.24.16 compute-0.management.localdomain compute-0.management", > "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.17 ceph-0.localdomain ceph-0", > "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.13 ceph-0.external.localdomain ceph-0.external", > "192.168.24.13 ceph-0.management.localdomain ceph-0.management", > "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/hosts", > "+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.11 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.10 controller-0.localdomain controller-0", > "172.17.3.11 controller-0.storage.localdomain controller-0.storage", > "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.111 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.14 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.14 
compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.16 compute-0.external.localdomain compute-0.external", > "192.168.24.16 compute-0.management.localdomain compute-0.management", > "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.17 ceph-0.localdomain ceph-0", > "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.13 ceph-0.external.localdomain ceph-0.external", > "192.168.24.13 ceph-0.management.localdomain ceph-0.management", > "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/hosts ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.11 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.10 controller-0.localdomain controller-0", > "172.17.3.11 controller-0.storage.localdomain controller-0.storage", > "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.111 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.14 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.16 
compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.16 compute-0.external.localdomain compute-0.external", > "192.168.24.16 compute-0.management.localdomain compute-0.management", > "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.17 ceph-0.localdomain ceph-0", > "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.13 ceph-0.external.localdomain ceph-0.external", > "192.168.24.13 ceph-0.management.localdomain ceph-0.management", > "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "", > "[2018-06-22 04:49:19,460] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/31df93a5-d855-4383-b908-cd27426083a4", > "", > "[2018-06-22 04:49:19,463] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-22 04:49:19,464] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/31df93a5-d855-4383-b908-cd27426083a4.json < /var/lib/heat-config/deployed/31df93a5-d855-4383-b908-cd27426083a4.notify.json", > "[2018-06-22 04:49:19,859] (heat-config) [INFO] ", > "[2018-06-22 04:49:19,859] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-22 04:49:20,069 p=11115 u=mistral | TASK [Check-mode for Run deployment ControllerHostsDeployment] ***************** >2018-06-22 04:49:20,084 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:49:20,107 p=11115 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-22 
04:49:20,290 p=11115 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "e6a4509f-19f5-47a5-a7c2-2fdb14e83061"}, "changed": false} >2018-06-22 04:49:20,314 p=11115 u=mistral | TASK [Render deployment file for ControllerAllNodesDeployment] ***************** >2018-06-22 04:49:21,078 p=11115 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "48298294eeb575736391d8b994935830b81ec4cc", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerAllNodesDeployment-e6a4509f-19f5-47a5-a7c2-2fdb14e83061", "gid": 0, "group": "root", "md5sum": "2df7db59133a0734a7bb5bbe86170da1", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 19031, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657360.49-78736941964001/source", "state": "file", "uid": 0} >2018-06-22 04:49:21,100 p=11115 u=mistral | TASK [Check if deployed file exists for ControllerAllNodesDeployment] ********** >2018-06-22 04:49:21,428 p=11115 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 04:49:21,452 p=11115 u=mistral | TASK [Check previous deployment rc for ControllerAllNodesDeployment] *********** >2018-06-22 04:49:21,470 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:49:21,493 p=11115 u=mistral | TASK [Remove deployed file for ControllerAllNodesDeployment when previous deployment failed] *** >2018-06-22 04:49:21,509 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:49:21,531 p=11115 u=mistral | TASK [Force remove deployed file for ControllerAllNodesDeployment] ************* >2018-06-22 04:49:21,547 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:49:21,571 p=11115 u=mistral | TASK [Run deployment ControllerAllNodesDeployment] 
***************************** >2018-06-22 04:49:22,499 p=11115 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/e6a4509f-19f5-47a5-a7c2-2fdb14e83061.notify.json)", "delta": "0:00:00.587232", "end": "2018-06-22 04:49:22.497355", "rc": 0, "start": "2018-06-22 04:49:21.910123", "stderr": "[2018-06-22 04:49:21,939] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/e6a4509f-19f5-47a5-a7c2-2fdb14e83061.json\n[2018-06-22 04:49:22,060] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-22 04:49:22,060] (heat-config) [DEBUG] \n[2018-06-22 04:49:22,060] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-06-22 04:49:22,060] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/e6a4509f-19f5-47a5-a7c2-2fdb14e83061.json < /var/lib/heat-config/deployed/e6a4509f-19f5-47a5-a7c2-2fdb14e83061.notify.json\n[2018-06-22 04:49:22,490] (heat-config) [INFO] \n[2018-06-22 04:49:22,490] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-22 04:49:21,939] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/e6a4509f-19f5-47a5-a7c2-2fdb14e83061.json", "[2018-06-22 04:49:22,060] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-22 04:49:22,060] (heat-config) [DEBUG] ", "[2018-06-22 04:49:22,060] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", "[2018-06-22 04:49:22,060] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/e6a4509f-19f5-47a5-a7c2-2fdb14e83061.json < /var/lib/heat-config/deployed/e6a4509f-19f5-47a5-a7c2-2fdb14e83061.notify.json", "[2018-06-22 04:49:22,490] (heat-config) [INFO] ", "[2018-06-22 04:49:22,490] (heat-config) [DEBUG] "], "stdout": 
"", "stdout_lines": []} >2018-06-22 04:49:22,523 p=11115 u=mistral | TASK [Output for ControllerAllNodesDeployment] ********************************* >2018-06-22 04:49:22,568 p=11115 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-22 04:49:21,939] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/e6a4509f-19f5-47a5-a7c2-2fdb14e83061.json", > "[2018-06-22 04:49:22,060] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-22 04:49:22,060] (heat-config) [DEBUG] ", > "[2018-06-22 04:49:22,060] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", > "[2018-06-22 04:49:22,060] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/e6a4509f-19f5-47a5-a7c2-2fdb14e83061.json < /var/lib/heat-config/deployed/e6a4509f-19f5-47a5-a7c2-2fdb14e83061.notify.json", > "[2018-06-22 04:49:22,490] (heat-config) [INFO] ", > "[2018-06-22 04:49:22,490] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-22 04:49:22,590 p=11115 u=mistral | TASK [Check-mode for Run deployment ControllerAllNodesDeployment] ************** >2018-06-22 04:49:22,603 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:49:22,624 p=11115 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-22 04:49:22,676 p=11115 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "21c0f718-1e7e-46ee-bd77-dc8ff9de9d0e"}, "changed": false} >2018-06-22 04:49:22,698 p=11115 u=mistral | TASK [Render deployment file for ControllerAllNodesValidationDeployment] ******* >2018-06-22 04:49:23,329 p=11115 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "feb9c3a7d91f93109081fec7cab0b276479ca203", "dest": 
"/var/lib/heat-config/tripleo-config-download/ControllerAllNodesValidationDeployment-21c0f718-1e7e-46ee-bd77-dc8ff9de9d0e", "gid": 0, "group": "root", "md5sum": "c038fb2a73d0e3c01963b5c9616479e9", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4941, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657362.75-207351167263283/source", "state": "file", "uid": 0} >2018-06-22 04:49:23,350 p=11115 u=mistral | TASK [Check if deployed file exists for ControllerAllNodesValidationDeployment] *** >2018-06-22 04:49:23,694 p=11115 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 04:49:23,718 p=11115 u=mistral | TASK [Check previous deployment rc for ControllerAllNodesValidationDeployment] *** >2018-06-22 04:49:23,738 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:49:23,761 p=11115 u=mistral | TASK [Remove deployed file for ControllerAllNodesValidationDeployment when previous deployment failed] *** >2018-06-22 04:49:23,778 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:49:23,802 p=11115 u=mistral | TASK [Force remove deployed file for ControllerAllNodesValidationDeployment] *** >2018-06-22 04:49:23,819 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:49:23,843 p=11115 u=mistral | TASK [Run deployment ControllerAllNodesValidationDeployment] ******************* >2018-06-22 04:49:25,469 p=11115 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/21c0f718-1e7e-46ee-bd77-dc8ff9de9d0e.notify.json)", "delta": "0:00:01.272795", "end": "2018-06-22 04:49:25.463008", "rc": 0, "start": "2018-06-22 04:49:24.190213", "stderr": 
"[2018-06-22 04:49:24,217] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/21c0f718-1e7e-46ee-bd77-dc8ff9de9d0e.json\n[2018-06-22 04:49:24,992] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 10.0.0.111 for local network 10.0.0.0/24.\\nPing to 10.0.0.111 succeeded.\\nSUCCESS\\nTrying to ping 172.17.1.10 for local network 172.17.1.0/24.\\nPing to 172.17.1.10 succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.12 for local network 172.17.2.0/24.\\nPing to 172.17.2.12 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.11 for local network 172.17.3.0/24.\\nPing to 172.17.3.11 succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.19 for local network 172.17.4.0/24.\\nPing to 172.17.4.19 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.12 for local network 192.168.24.0/24.\\nPing to 192.168.24.12 succeeded.\\nSUCCESS\\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-22 04:49:24,992] (heat-config) [DEBUG] [2018-06-22 04:49:24,239] (heat-config) [INFO] ping_test_ips=172.17.3.11 172.17.4.19 172.17.1.10 172.17.2.12 10.0.0.111 192.168.24.12\n[2018-06-22 04:49:24,240] (heat-config) [INFO] validate_fqdn=False\n[2018-06-22 04:49:24,240] (heat-config) [INFO] validate_ntp=True\n[2018-06-22 04:49:24,240] (heat-config) [INFO] deploy_server_id=c1fa7088-58e0-4167-924a-7460143754f1\n[2018-06-22 04:49:24,240] (heat-config) [INFO] deploy_action=CREATE\n[2018-06-22 04:49:24,240] (heat-config) [INFO] deploy_stack_id=overcloud-ControllerAllNodesValidationDeployment-x4abpj5kfcwu-0-ikzu4l4gl3pf/fe7bae37-4998-4a5f-af7b-057112fb3369\n[2018-06-22 04:49:24,240] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-22 04:49:24,240] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-22 04:49:24,240] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/21c0f718-1e7e-46ee-bd77-dc8ff9de9d0e\n[2018-06-22 
04:49:24,988] (heat-config) [INFO] Trying to ping 10.0.0.111 for local network 10.0.0.0/24.\nPing to 10.0.0.111 succeeded.\nSUCCESS\nTrying to ping 172.17.1.10 for local network 172.17.1.0/24.\nPing to 172.17.1.10 succeeded.\nSUCCESS\nTrying to ping 172.17.2.12 for local network 172.17.2.0/24.\nPing to 172.17.2.12 succeeded.\nSUCCESS\nTrying to ping 172.17.3.11 for local network 172.17.3.0/24.\nPing to 172.17.3.11 succeeded.\nSUCCESS\nTrying to ping 172.17.4.19 for local network 172.17.4.0/24.\nPing to 172.17.4.19 succeeded.\nSUCCESS\nTrying to ping 192.168.24.12 for local network 192.168.24.0/24.\nPing to 192.168.24.12 succeeded.\nSUCCESS\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\nSUCCESS\n\n[2018-06-22 04:49:24,988] (heat-config) [DEBUG] \n[2018-06-22 04:49:24,988] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/21c0f718-1e7e-46ee-bd77-dc8ff9de9d0e\n\n[2018-06-22 04:49:24,992] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-22 04:49:24,993] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/21c0f718-1e7e-46ee-bd77-dc8ff9de9d0e.json < /var/lib/heat-config/deployed/21c0f718-1e7e-46ee-bd77-dc8ff9de9d0e.notify.json\n[2018-06-22 04:49:25,455] (heat-config) [INFO] \n[2018-06-22 04:49:25,456] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-22 04:49:24,217] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/21c0f718-1e7e-46ee-bd77-dc8ff9de9d0e.json", "[2018-06-22 04:49:24,992] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 10.0.0.111 for local network 10.0.0.0/24.\\nPing to 10.0.0.111 succeeded.\\nSUCCESS\\nTrying to ping 172.17.1.10 for local network 172.17.1.0/24.\\nPing to 172.17.1.10 succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.12 for local network 172.17.2.0/24.\\nPing to 172.17.2.12 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.11 for local network 172.17.3.0/24.\\nPing to 172.17.3.11 
succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.19 for local network 172.17.4.0/24.\\nPing to 172.17.4.19 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.12 for local network 192.168.24.0/24.\\nPing to 192.168.24.12 succeeded.\\nSUCCESS\\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-22 04:49:24,992] (heat-config) [DEBUG] [2018-06-22 04:49:24,239] (heat-config) [INFO] ping_test_ips=172.17.3.11 172.17.4.19 172.17.1.10 172.17.2.12 10.0.0.111 192.168.24.12", "[2018-06-22 04:49:24,240] (heat-config) [INFO] validate_fqdn=False", "[2018-06-22 04:49:24,240] (heat-config) [INFO] validate_ntp=True", "[2018-06-22 04:49:24,240] (heat-config) [INFO] deploy_server_id=c1fa7088-58e0-4167-924a-7460143754f1", "[2018-06-22 04:49:24,240] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-22 04:49:24,240] (heat-config) [INFO] deploy_stack_id=overcloud-ControllerAllNodesValidationDeployment-x4abpj5kfcwu-0-ikzu4l4gl3pf/fe7bae37-4998-4a5f-af7b-057112fb3369", "[2018-06-22 04:49:24,240] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-22 04:49:24,240] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-22 04:49:24,240] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/21c0f718-1e7e-46ee-bd77-dc8ff9de9d0e", "[2018-06-22 04:49:24,988] (heat-config) [INFO] Trying to ping 10.0.0.111 for local network 10.0.0.0/24.", "Ping to 10.0.0.111 succeeded.", "SUCCESS", "Trying to ping 172.17.1.10 for local network 172.17.1.0/24.", "Ping to 172.17.1.10 succeeded.", "SUCCESS", "Trying to ping 172.17.2.12 for local network 172.17.2.0/24.", "Ping to 172.17.2.12 succeeded.", "SUCCESS", "Trying to ping 172.17.3.11 for local network 172.17.3.0/24.", "Ping to 172.17.3.11 succeeded.", "SUCCESS", "Trying to ping 172.17.4.19 for local network 172.17.4.0/24.", "Ping to 172.17.4.19 succeeded.", "SUCCESS", "Trying to ping 192.168.24.12 for local 
network 192.168.24.0/24.", "Ping to 192.168.24.12 succeeded.", "SUCCESS", "Trying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.", "SUCCESS", "", "[2018-06-22 04:49:24,988] (heat-config) [DEBUG] ", "[2018-06-22 04:49:24,988] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/21c0f718-1e7e-46ee-bd77-dc8ff9de9d0e", "", "[2018-06-22 04:49:24,992] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-22 04:49:24,993] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/21c0f718-1e7e-46ee-bd77-dc8ff9de9d0e.json < /var/lib/heat-config/deployed/21c0f718-1e7e-46ee-bd77-dc8ff9de9d0e.notify.json", "[2018-06-22 04:49:25,455] (heat-config) [INFO] ", "[2018-06-22 04:49:25,456] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-22 04:49:25,495 p=11115 u=mistral | TASK [Output for ControllerAllNodesValidationDeployment] *********************** >2018-06-22 04:49:25,546 p=11115 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-22 04:49:24,217] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/21c0f718-1e7e-46ee-bd77-dc8ff9de9d0e.json", > "[2018-06-22 04:49:24,992] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 10.0.0.111 for local network 10.0.0.0/24.\\nPing to 10.0.0.111 succeeded.\\nSUCCESS\\nTrying to ping 172.17.1.10 for local network 172.17.1.0/24.\\nPing to 172.17.1.10 succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.12 for local network 172.17.2.0/24.\\nPing to 172.17.2.12 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.11 for local network 172.17.3.0/24.\\nPing to 172.17.3.11 succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.19 for local network 172.17.4.0/24.\\nPing to 172.17.4.19 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.12 for local network 192.168.24.0/24.\\nPing to 192.168.24.12 succeeded.\\nSUCCESS\\nTrying to ping default gateway 
10.0.0.1...Ping to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-22 04:49:24,992] (heat-config) [DEBUG] [2018-06-22 04:49:24,239] (heat-config) [INFO] ping_test_ips=172.17.3.11 172.17.4.19 172.17.1.10 172.17.2.12 10.0.0.111 192.168.24.12", > "[2018-06-22 04:49:24,240] (heat-config) [INFO] validate_fqdn=False", > "[2018-06-22 04:49:24,240] (heat-config) [INFO] validate_ntp=True", > "[2018-06-22 04:49:24,240] (heat-config) [INFO] deploy_server_id=c1fa7088-58e0-4167-924a-7460143754f1", > "[2018-06-22 04:49:24,240] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-22 04:49:24,240] (heat-config) [INFO] deploy_stack_id=overcloud-ControllerAllNodesValidationDeployment-x4abpj5kfcwu-0-ikzu4l4gl3pf/fe7bae37-4998-4a5f-af7b-057112fb3369", > "[2018-06-22 04:49:24,240] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-22 04:49:24,240] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-22 04:49:24,240] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/21c0f718-1e7e-46ee-bd77-dc8ff9de9d0e", > "[2018-06-22 04:49:24,988] (heat-config) [INFO] Trying to ping 10.0.0.111 for local network 10.0.0.0/24.", > "Ping to 10.0.0.111 succeeded.", > "SUCCESS", > "Trying to ping 172.17.1.10 for local network 172.17.1.0/24.", > "Ping to 172.17.1.10 succeeded.", > "SUCCESS", > "Trying to ping 172.17.2.12 for local network 172.17.2.0/24.", > "Ping to 172.17.2.12 succeeded.", > "SUCCESS", > "Trying to ping 172.17.3.11 for local network 172.17.3.0/24.", > "Ping to 172.17.3.11 succeeded.", > "SUCCESS", > "Trying to ping 172.17.4.19 for local network 172.17.4.0/24.", > "Ping to 172.17.4.19 succeeded.", > "SUCCESS", > "Trying to ping 192.168.24.12 for local network 192.168.24.0/24.", > "Ping to 192.168.24.12 succeeded.", > "SUCCESS", > "Trying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.", > "SUCCESS", > "", > "[2018-06-22 04:49:24,988] (heat-config) [DEBUG] 
", > "[2018-06-22 04:49:24,988] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/21c0f718-1e7e-46ee-bd77-dc8ff9de9d0e", > "", > "[2018-06-22 04:49:24,992] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-22 04:49:24,993] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/21c0f718-1e7e-46ee-bd77-dc8ff9de9d0e.json < /var/lib/heat-config/deployed/21c0f718-1e7e-46ee-bd77-dc8ff9de9d0e.notify.json", > "[2018-06-22 04:49:25,455] (heat-config) [INFO] ", > "[2018-06-22 04:49:25,456] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-22 04:49:25,570 p=11115 u=mistral | TASK [Check-mode for Run deployment ControllerAllNodesValidationDeployment] **** >2018-06-22 04:49:25,584 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:49:25,607 p=11115 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-22 04:49:25,709 p=11115 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "cd8e8c5c-b390-46c3-ba5a-ac1430678589"}, "changed": false} >2018-06-22 04:49:25,731 p=11115 u=mistral | TASK [Render deployment file for ControllerHostPrepDeployment] ***************** >2018-06-22 04:49:26,417 p=11115 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "a2899ca50d74ce3bbaa92fbcc0d63553939b945c", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerHostPrepDeployment-cd8e8c5c-b390-46c3-ba5a-ac1430678589", "gid": 0, "group": "root", "md5sum": "bf72bfa1cfa4c55f43d0e5f21504213d", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 45397, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657365.83-280052600131596/source", "state": "file", "uid": 0} >2018-06-22 04:49:26,440 p=11115 u=mistral | TASK [Check if deployed file exists for ControllerHostPrepDeployment] ********** 
>2018-06-22 04:49:26,767 p=11115 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 04:49:26,794 p=11115 u=mistral | TASK [Check previous deployment rc for ControllerHostPrepDeployment] *********** >2018-06-22 04:49:26,811 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:49:26,836 p=11115 u=mistral | TASK [Remove deployed file for ControllerHostPrepDeployment when previous deployment failed] *** >2018-06-22 04:49:26,852 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:49:26,876 p=11115 u=mistral | TASK [Force remove deployed file for ControllerHostPrepDeployment] ************* >2018-06-22 04:49:26,893 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:49:26,914 p=11115 u=mistral | TASK [Run deployment ControllerHostPrepDeployment] ***************************** >2018-06-22 04:49:49,855 p=11115 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/cd8e8c5c-b390-46c3-ba5a-ac1430678589.notify.json)", "delta": "0:00:22.582151", "end": "2018-06-22 04:49:49.834994", "rc": 0, "start": "2018-06-22 04:49:27.252843", "stderr": "[2018-06-22 04:49:27,278] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/cd8e8c5c-b390-46c3-ba5a-ac1430678589.json\n[2018-06-22 04:49:49,407] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => 
(item=/var/log/containers/aodh)\\nchanged: [localhost] => (item=/var/log/containers/httpd/aodh-api)\\n\\nTASK [aodh logs readme] ********************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"b6cf6dbe054f430c33d39c1a1a88593536d6e659\\\", \\\"msg\\\": \\\"Destination directory /var/log/aodh does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [ceilometer logs readme] **************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\\\", \\\"msg\\\": \\\"Destination directory /var/log/ceilometer does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/cinder)\\nchanged: [localhost] => (item=/var/log/containers/httpd/cinder-api)\\n\\nTASK [cinder logs readme] ******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"0a3814f5aad089ba842c13ffc2c7bb7a7b3e8292\\\", \\\"msg\\\": \\\"Destination directory /var/log/cinder does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/lib/cinder)\\nok: [localhost] => (item=/var/log/containers/cinder)\\n\\nTASK [ensure ceph configurations exist] ****************************************\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/log/containers/cinder)\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/log/containers/cinder)\\nok: [localhost] => (item=/var/lib/cinder)\\n\\nTASK [cinder_enable_iscsi_backend fact] ****************************************\\nok: [localhost]\\n\\nTASK [cinder create LVM volume group dd] ***************************************\\nskipping: [localhost]\\n\\nTASK [cinder create LVM volume group] ******************************************\\nskipping: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/glance)\\n\\nTASK [glance logs readme] ******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"e368ae3272baeb19e1113009ea5dae00e797c919\\\", \\\"msg\\\": \\\"Destination directory /var/log/glance does not exist\\\"}\\n...ignoring\\n\\nTASK [set_fact] ****************************************************************\\nskipping: [localhost]\\n\\nTASK [file] ********************************************************************\\nskipping: [localhost]\\n\\nTASK [stat] ********************************************************************\\nskipping: [localhost]\\n\\nTASK [copy] ********************************************************************\\nskipping: [localhost] => (item={u'NETAPP_SHARE': u''}) \\n\\nTASK [mount] *******************************************************************\\nskipping: [localhost] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) \\n\\nTASK [Mount Node Staging Location] *********************************************\\nskipping: [localhost]\\n\\nTASK [Mount NFS on host] *******************************************************\\nskipping: [localhost] => (item={u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0', u'NFS_SHARE': u''}) \\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/gnocchi)\\nchanged: [localhost] => (item=/var/log/containers/httpd/gnocchi-api)\\n\\nTASK [gnocchi logs readme] *****************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"2f6114e0f135d7222e70a07579ab0b2b6f967ff8\\\", \\\"msg\\\": \\\"Destination directory /var/log/gnocchi does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [get parameters] **********************************************************\\nok: [localhost]\\n\\nTASK [get DeployedSSLCertificatePath attributes] *******************************\\nskipping: [localhost]\\n\\nTASK [Assign bootstrap node] ***************************************************\\nskipping: [localhost]\\n\\nTASK [set is_bootstrap_node fact] **********************************************\\nskipping: [localhost]\\n\\nTASK [get haproxy status] ******************************************************\\nskipping: [localhost]\\n\\nTASK [get pacemaker status] ****************************************************\\nskipping: [localhost]\\n\\nTASK [get docker status] *******************************************************\\nskipping: [localhost]\\n\\nTASK [get container_id] ********************************************************\\nskipping: [localhost]\\n\\nTASK [get pcs resource name for haproxy container] *****************************\\nskipping: [localhost]\\n\\nTASK [remove DeployedSSLCertificatePath if is dir] *****************************\\nskipping: [localhost]\\n\\nTASK [push certificate content] ************************************************\\nskipping: [localhost]\\n\\nTASK [set certificate ownership] ***********************************************\\nskipping: [localhost]\\n\\nTASK [reload haproxy if enabled] ***********************************************\\nskipping: [localhost]\\n\\nTASK [restart pacemaker resource for haproxy] **********************************\\nskipping: [localhost]\\n\\nTASK [set kolla_dir fact] ******************************************************\\nskipping: [localhost]\\n\\nTASK [set certificate group on host via container] 
*****************************\\nskipping: [localhost]\\n\\nTASK [copy certificate from kolla directory to final location] *****************\\nskipping: [localhost]\\n\\nTASK [send restart order to haproxy container] *********************************\\nskipping: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/lib/haproxy)\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/heat)\\nchanged: [localhost] => (item=/var/log/containers/httpd/heat-api)\\n\\nTASK [heat logs readme] ********************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"d30ca3bda176434d31659e7379616dd162ddb246\\\", \\\"msg\\\": \\\"Destination directory /var/log/heat does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost] => (item=/var/log/containers/heat)\\nchanged: [localhost] => (item=/var/log/containers/httpd/heat-api-cfn)\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/horizon)\\nchanged: [localhost] => (item=/var/log/containers/httpd/horizon)\\n\\nTASK [horizon logs readme] *****************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"ac324739761cb36b925d6e309482e26f7fe49b91\\\", \\\"msg\\\": \\\"Destination directory /var/log/horizon does not exist\\\"}\\n...ignoring\\n\\nTASK [stat /lib/systemd/system/iscsid.socket] **********************************\\nok: [localhost]\\n\\nTASK [Stop and disable iscsid.socket service] **********************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/keystone)\\nchanged: [localhost] => (item=/var/log/containers/httpd/keystone)\\n\\nTASK [keystone logs readme] ****************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"910be882addb6df99267e9bd303f6d9bf658562e\\\", \\\"msg\\\": \\\"Destination directory /var/log/keystone does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [memcached logs readme] ***************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/log/containers/mysql)\\nok: [localhost] => (item=/var/lib/mysql)\\n\\nTASK [mysql logs readme] *******************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/neutron)\\nchanged: [localhost] => (item=/var/log/containers/httpd/neutron-api)\\n\\nTASK [neutron logs readme] *****************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"f5a95f434a4aad25a9a81a045dec39159a6e8864\\\", \\\"msg\\\": \\\"Destination directory /var/log/neutron does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost] => (item=/var/log/containers/neutron)\\n\\nTASK [create /var/lib/neutron] *************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/nova)\\nchanged: [localhost] => (item=/var/log/containers/httpd/nova-api)\\n\\nTASK [nova logs readme] ********************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"c2216cc4edf5d3ce90f10748c3243db4e1842a85\\\", \\\"msg\\\": \\\"Destination directory /var/log/nova does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost] => (item=/var/log/containers/nova)\\nchanged: [localhost] => (item=/var/log/containers/httpd/nova-placement)\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/panko)\\nchanged: [localhost] => (item=/var/log/containers/httpd/panko-api)\\n\\nTASK [panko logs readme] *******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"903397bbd82e9b1f53087e3d7e8975d851857ce2\\\", \\\"msg\\\": \\\"Destination directory /var/log/panko does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/lib/rabbitmq)\\nchanged: [localhost] => (item=/var/log/containers/rabbitmq)\\n\\nTASK [rabbitmq logs readme] ****************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"ee241f2199f264c9d0f384cf389fe255e8bf8a77\\\", \\\"msg\\\": \\\"Destination directory /var/log/rabbitmq does not exist\\\"}\\n...ignoring\\n\\nTASK [stop the Erlang port mapper on the host and make sure it cannot bind to the port used by container] ***\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/lib/redis)\\nchanged: [localhost] => (item=/var/log/containers/redis)\\nok: [localhost] => (item=/var/run/redis)\\n\\nTASK [redis logs readme] *******************************************************\\nchanged: [localhost]\\n\\nTASK [create /var/lib/sahara] **************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent sahara logs directory] *********************************\\nchanged: [localhost]\\n\\nTASK [sahara logs readme] ******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"b0212a1177fa4a88502d17a1cbc31198040cf047\\\", \\\"msg\\\": \\\"Destination directory /var/log/sahara does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/srv/node)\\nchanged: [localhost] => (item=/var/log/swift)\\n\\nTASK [Create swift logging symlink] ********************************************\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/srv/node)\\nok: [localhost] => (item=/var/log/swift)\\nok: [localhost] => (item=/var/log/containers)\\n\\nTASK [Set swift_use_local_disks fact] ******************************************\\nok: [localhost]\\n\\nTASK [Create Swift d1 directory if needed] *************************************\\nchanged: [localhost]\\n\\nTASK [swift logs readme] *******************************************************\\nchanged: [localhost]\\n\\nTASK [Format SwiftRawDisks] ****************************************************\\n\\nTASK [Mount devices defined in SwiftRawDisks] **********************************\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=60 changed=33 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-22 04:49:49,407] (heat-config) [DEBUG] [2018-06-22 04:49:27,302] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/cd8e8c5c-b390-46c3-ba5a-ac1430678589_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/cd8e8c5c-b390-46c3-ba5a-ac1430678589_variables.json\n[2018-06-22 04:49:49,402] (heat-config) [INFO] Return code 
0\n[2018-06-22 04:49:49,403] (heat-config) [INFO] \nPLAY [localhost] ***************************************************************\n\nTASK [Gathering Facts] *********************************************************\nok: [localhost]\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/aodh)\nchanged: [localhost] => (item=/var/log/containers/httpd/aodh-api)\n\nTASK [aodh logs readme] ********************************************************\nfatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"b6cf6dbe054f430c33d39c1a1a88593536d6e659\", \"msg\": \"Destination directory /var/log/aodh does not exist\"}\n...ignoring\n\nTASK [create persistent logs directory] ****************************************\nok: [localhost]\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost]\n\nTASK [ceilometer logs readme] **************************************************\nfatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\", \"msg\": \"Destination directory /var/log/ceilometer does not exist\"}\n...ignoring\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/cinder)\nchanged: [localhost] => (item=/var/log/containers/httpd/cinder-api)\n\nTASK [cinder logs readme] ******************************************************\nfatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"0a3814f5aad089ba842c13ffc2c7bb7a7b3e8292\", \"msg\": \"Destination directory /var/log/cinder does not exist\"}\n...ignoring\n\nTASK [create persistent directories] *******************************************\nchanged: [localhost] => (item=/var/lib/cinder)\nok: [localhost] => (item=/var/log/containers/cinder)\n\nTASK [ensure ceph configurations exist] ****************************************\nchanged: [localhost]\n\nTASK [create persistent directories] *******************************************\nok: [localhost] => (item=/var/log/containers/cinder)\n\nTASK [create persistent directories] *******************************************\nok: [localhost] => (item=/var/log/containers/cinder)\nok: [localhost] => (item=/var/lib/cinder)\n\nTASK [cinder_enable_iscsi_backend fact] ****************************************\nok: [localhost]\n\nTASK [cinder create LVM volume group dd] ***************************************\nskipping: [localhost]\n\nTASK [cinder create LVM volume group] ******************************************\nskipping: [localhost]\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/glance)\n\nTASK [glance logs readme] ******************************************************\nfatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"e368ae3272baeb19e1113009ea5dae00e797c919\", \"msg\": \"Destination directory /var/log/glance does not exist\"}\n...ignoring\n\nTASK [set_fact] ****************************************************************\nskipping: [localhost]\n\nTASK [file] ********************************************************************\nskipping: [localhost]\n\nTASK [stat] ********************************************************************\nskipping: [localhost]\n\nTASK [copy] ********************************************************************\nskipping: [localhost] => (item={u'NETAPP_SHARE': u''}) \n\nTASK [mount] *******************************************************************\nskipping: [localhost] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) \n\nTASK [Mount Node Staging Location] *********************************************\nskipping: [localhost]\n\nTASK [Mount NFS on host] *******************************************************\nskipping: [localhost] => (item={u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0', u'NFS_SHARE': u''}) \n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/gnocchi)\nchanged: [localhost] => (item=/var/log/containers/httpd/gnocchi-api)\n\nTASK [gnocchi logs readme] *****************************************************\nfatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"2f6114e0f135d7222e70a07579ab0b2b6f967ff8\", \"msg\": \"Destination directory /var/log/gnocchi does not exist\"}\n...ignoring\n\nTASK [create persistent logs directory] ****************************************\nok: [localhost]\n\nTASK [get parameters] **********************************************************\nok: [localhost]\n\nTASK [get DeployedSSLCertificatePath attributes] *******************************\nskipping: [localhost]\n\nTASK [Assign bootstrap node] ***************************************************\nskipping: [localhost]\n\nTASK [set is_bootstrap_node fact] **********************************************\nskipping: [localhost]\n\nTASK [get haproxy status] ******************************************************\nskipping: [localhost]\n\nTASK [get pacemaker status] ****************************************************\nskipping: [localhost]\n\nTASK [get docker status] *******************************************************\nskipping: [localhost]\n\nTASK [get container_id] ********************************************************\nskipping: [localhost]\n\nTASK [get pcs resource name for haproxy container] *****************************\nskipping: [localhost]\n\nTASK [remove DeployedSSLCertificatePath if is dir] *****************************\nskipping: [localhost]\n\nTASK [push certificate content] ************************************************\nskipping: [localhost]\n\nTASK [set certificate ownership] ***********************************************\nskipping: [localhost]\n\nTASK [reload haproxy if enabled] ***********************************************\nskipping: [localhost]\n\nTASK [restart pacemaker resource for haproxy] **********************************\nskipping: [localhost]\n\nTASK [set kolla_dir fact] ******************************************************\nskipping: [localhost]\n\nTASK [set certificate group on host via container] *****************************\nskipping: [localhost]\n\nTASK [copy certificate 
from kolla directory to final location] *****************\nskipping: [localhost]\n\nTASK [send restart order to haproxy container] *********************************\nskipping: [localhost]\n\nTASK [create persistent directories] *******************************************\nok: [localhost] => (item=/var/lib/haproxy)\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/heat)\nchanged: [localhost] => (item=/var/log/containers/httpd/heat-api)\n\nTASK [heat logs readme] ********************************************************\nfatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"d30ca3bda176434d31659e7379616dd162ddb246\", \"msg\": \"Destination directory /var/log/heat does not exist\"}\n...ignoring\n\nTASK [create persistent logs directory] ****************************************\nok: [localhost] => (item=/var/log/containers/heat)\nchanged: [localhost] => (item=/var/log/containers/httpd/heat-api-cfn)\n\nTASK [create persistent logs directory] ****************************************\nok: [localhost]\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/horizon)\nchanged: [localhost] => (item=/var/log/containers/httpd/horizon)\n\nTASK [horizon logs readme] *****************************************************\nfatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"ac324739761cb36b925d6e309482e26f7fe49b91\", \"msg\": \"Destination directory /var/log/horizon does not exist\"}\n...ignoring\n\nTASK [stat /lib/systemd/system/iscsid.socket] **********************************\nok: [localhost]\n\nTASK [Stop and disable iscsid.socket service] **********************************\nchanged: [localhost]\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/keystone)\nchanged: [localhost] => (item=/var/log/containers/httpd/keystone)\n\nTASK [keystone logs readme] ****************************************************\nfatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"910be882addb6df99267e9bd303f6d9bf658562e\", \"msg\": \"Destination directory /var/log/keystone does not exist\"}\n...ignoring\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost]\n\nTASK [memcached logs readme] ***************************************************\nchanged: [localhost]\n\nTASK [create persistent directories] *******************************************\nchanged: [localhost] => (item=/var/log/containers/mysql)\nok: [localhost] => (item=/var/lib/mysql)\n\nTASK [mysql logs readme] *******************************************************\nchanged: [localhost]\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/neutron)\nchanged: [localhost] => (item=/var/log/containers/httpd/neutron-api)\n\nTASK [neutron logs readme] *****************************************************\nfatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"f5a95f434a4aad25a9a81a045dec39159a6e8864\", \"msg\": \"Destination directory /var/log/neutron does not exist\"}\n...ignoring\n\nTASK [create persistent logs directory] ****************************************\nok: [localhost] => (item=/var/log/containers/neutron)\n\nTASK [create /var/lib/neutron] *************************************************\nchanged: [localhost]\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/nova)\nchanged: [localhost] => (item=/var/log/containers/httpd/nova-api)\n\nTASK [nova logs readme] ********************************************************\nfatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"c2216cc4edf5d3ce90f10748c3243db4e1842a85\", \"msg\": \"Destination directory /var/log/nova does not exist\"}\n...ignoring\n\nTASK [create persistent logs directory] ****************************************\nok: [localhost]\n\nTASK [create persistent logs directory] ****************************************\nok: [localhost] => (item=/var/log/containers/nova)\nchanged: [localhost] => (item=/var/log/containers/httpd/nova-placement)\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/panko)\nchanged: [localhost] => (item=/var/log/containers/httpd/panko-api)\n\nTASK [panko logs readme] *******************************************************\nfatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"903397bbd82e9b1f53087e3d7e8975d851857ce2\", \"msg\": \"Destination directory /var/log/panko does not exist\"}\n...ignoring\n\nTASK [create persistent directories] *******************************************\nchanged: [localhost] => (item=/var/lib/rabbitmq)\nchanged: [localhost] => (item=/var/log/containers/rabbitmq)\n\nTASK [rabbitmq logs readme] ****************************************************\nfatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"ee241f2199f264c9d0f384cf389fe255e8bf8a77\", \"msg\": \"Destination directory /var/log/rabbitmq does not exist\"}\n...ignoring\n\nTASK [stop the Erlang port mapper on the host and make sure it cannot bind to the port used by container] ***\nchanged: [localhost]\n\nTASK [create persistent directories] *******************************************\nok: [localhost] => (item=/var/lib/redis)\nchanged: [localhost] => (item=/var/log/containers/redis)\nok: [localhost] => (item=/var/run/redis)\n\nTASK [redis logs readme] *******************************************************\nchanged: [localhost]\n\nTASK [create /var/lib/sahara] **************************************************\nchanged: [localhost]\n\nTASK [create persistent sahara logs directory] *********************************\nchanged: [localhost]\n\nTASK [sahara logs readme] ******************************************************\nfatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"b0212a1177fa4a88502d17a1cbc31198040cf047\", \"msg\": \"Destination directory /var/log/sahara does not exist\"}\n...ignoring\n\nTASK [create persistent directories] *******************************************\nchanged: [localhost] => (item=/srv/node)\nchanged: [localhost] => (item=/var/log/swift)\n\nTASK [Create swift logging symlink] ********************************************\nchanged: [localhost]\n\nTASK [create persistent directories] *******************************************\nok: [localhost] => (item=/srv/node)\nok: [localhost] => (item=/var/log/swift)\nok: [localhost] => (item=/var/log/containers)\n\nTASK [Set swift_use_local_disks fact] ******************************************\nok: [localhost]\n\nTASK [Create Swift d1 directory if needed] *************************************\nchanged: [localhost]\n\nTASK [swift logs readme] *******************************************************\nchanged: [localhost]\n\nTASK [Format SwiftRawDisks] 
****************************************************\n\nTASK [Mount devices defined in SwiftRawDisks] **********************************\n\nTASK [Create /var/lib/docker-puppet] *******************************************\nchanged: [localhost]\n\nTASK [Write docker-puppet.py] **************************************************\nchanged: [localhost]\n\nPLAY RECAP *********************************************************************\nlocalhost : ok=60 changed=33 unreachable=0 failed=0 \n\n\n[2018-06-22 04:49:49,403] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/cd8e8c5c-b390-46c3-ba5a-ac1430678589_playbook.yaml\n\n[2018-06-22 04:49:49,407] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible\n[2018-06-22 04:49:49,408] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/cd8e8c5c-b390-46c3-ba5a-ac1430678589.json < /var/lib/heat-config/deployed/cd8e8c5c-b390-46c3-ba5a-ac1430678589.notify.json\n[2018-06-22 04:49:49,828] (heat-config) [INFO] \n[2018-06-22 04:49:49,828] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-22 04:49:27,278] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/cd8e8c5c-b390-46c3-ba5a-ac1430678589.json", "[2018-06-22 04:49:49,407] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/aodh)\\nchanged: [localhost] => (item=/var/log/containers/httpd/aodh-api)\\n\\nTASK [aodh logs readme] ********************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"b6cf6dbe054f430c33d39c1a1a88593536d6e659\\\", \\\"msg\\\": \\\"Destination directory /var/log/aodh does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [ceilometer logs readme] **************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\\\", \\\"msg\\\": \\\"Destination directory /var/log/ceilometer does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/cinder)\\nchanged: [localhost] => (item=/var/log/containers/httpd/cinder-api)\\n\\nTASK [cinder logs readme] ******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"0a3814f5aad089ba842c13ffc2c7bb7a7b3e8292\\\", \\\"msg\\\": \\\"Destination directory /var/log/cinder does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/lib/cinder)\\nok: [localhost] => (item=/var/log/containers/cinder)\\n\\nTASK [ensure ceph configurations exist] ****************************************\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/log/containers/cinder)\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/log/containers/cinder)\\nok: [localhost] => (item=/var/lib/cinder)\\n\\nTASK [cinder_enable_iscsi_backend fact] ****************************************\\nok: [localhost]\\n\\nTASK [cinder create LVM volume group dd] ***************************************\\nskipping: [localhost]\\n\\nTASK [cinder create LVM volume group] ******************************************\\nskipping: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/glance)\\n\\nTASK [glance logs readme] ******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"e368ae3272baeb19e1113009ea5dae00e797c919\\\", \\\"msg\\\": \\\"Destination directory /var/log/glance does not exist\\\"}\\n...ignoring\\n\\nTASK [set_fact] ****************************************************************\\nskipping: [localhost]\\n\\nTASK [file] ********************************************************************\\nskipping: [localhost]\\n\\nTASK [stat] ********************************************************************\\nskipping: [localhost]\\n\\nTASK [copy] ********************************************************************\\nskipping: [localhost] => (item={u'NETAPP_SHARE': u''}) \\n\\nTASK [mount] *******************************************************************\\nskipping: [localhost] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) \\n\\nTASK [Mount Node Staging Location] *********************************************\\nskipping: [localhost]\\n\\nTASK [Mount NFS on host] *******************************************************\\nskipping: [localhost] => (item={u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0', u'NFS_SHARE': u''}) \\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/gnocchi)\\nchanged: [localhost] => (item=/var/log/containers/httpd/gnocchi-api)\\n\\nTASK [gnocchi logs readme] *****************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"2f6114e0f135d7222e70a07579ab0b2b6f967ff8\\\", \\\"msg\\\": \\\"Destination directory /var/log/gnocchi does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [get parameters] **********************************************************\\nok: [localhost]\\n\\nTASK [get DeployedSSLCertificatePath attributes] *******************************\\nskipping: [localhost]\\n\\nTASK [Assign bootstrap node] ***************************************************\\nskipping: [localhost]\\n\\nTASK [set is_bootstrap_node fact] **********************************************\\nskipping: [localhost]\\n\\nTASK [get haproxy status] ******************************************************\\nskipping: [localhost]\\n\\nTASK [get pacemaker status] ****************************************************\\nskipping: [localhost]\\n\\nTASK [get docker status] *******************************************************\\nskipping: [localhost]\\n\\nTASK [get container_id] ********************************************************\\nskipping: [localhost]\\n\\nTASK [get pcs resource name for haproxy container] *****************************\\nskipping: [localhost]\\n\\nTASK [remove DeployedSSLCertificatePath if is dir] *****************************\\nskipping: [localhost]\\n\\nTASK [push certificate content] ************************************************\\nskipping: [localhost]\\n\\nTASK [set certificate ownership] ***********************************************\\nskipping: [localhost]\\n\\nTASK [reload haproxy if enabled] ***********************************************\\nskipping: [localhost]\\n\\nTASK [restart pacemaker resource for haproxy] **********************************\\nskipping: [localhost]\\n\\nTASK [set kolla_dir fact] ******************************************************\\nskipping: [localhost]\\n\\nTASK [set certificate group on host via container] 
*****************************\\nskipping: [localhost]\\n\\nTASK [copy certificate from kolla directory to final location] *****************\\nskipping: [localhost]\\n\\nTASK [send restart order to haproxy container] *********************************\\nskipping: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/lib/haproxy)\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/heat)\\nchanged: [localhost] => (item=/var/log/containers/httpd/heat-api)\\n\\nTASK [heat logs readme] ********************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"d30ca3bda176434d31659e7379616dd162ddb246\\\", \\\"msg\\\": \\\"Destination directory /var/log/heat does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost] => (item=/var/log/containers/heat)\\nchanged: [localhost] => (item=/var/log/containers/httpd/heat-api-cfn)\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/horizon)\\nchanged: [localhost] => (item=/var/log/containers/httpd/horizon)\\n\\nTASK [horizon logs readme] *****************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"ac324739761cb36b925d6e309482e26f7fe49b91\\\", \\\"msg\\\": \\\"Destination directory /var/log/horizon does not exist\\\"}\\n...ignoring\\n\\nTASK [stat /lib/systemd/system/iscsid.socket] **********************************\\nok: [localhost]\\n\\nTASK [Stop and disable iscsid.socket service] **********************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/keystone)\\nchanged: [localhost] => (item=/var/log/containers/httpd/keystone)\\n\\nTASK [keystone logs readme] ****************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"910be882addb6df99267e9bd303f6d9bf658562e\\\", \\\"msg\\\": \\\"Destination directory /var/log/keystone does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [memcached logs readme] ***************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/log/containers/mysql)\\nok: [localhost] => (item=/var/lib/mysql)\\n\\nTASK [mysql logs readme] *******************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/neutron)\\nchanged: [localhost] => (item=/var/log/containers/httpd/neutron-api)\\n\\nTASK [neutron logs readme] *****************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"f5a95f434a4aad25a9a81a045dec39159a6e8864\\\", \\\"msg\\\": \\\"Destination directory /var/log/neutron does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost] => (item=/var/log/containers/neutron)\\n\\nTASK [create /var/lib/neutron] *************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/nova)\\nchanged: [localhost] => (item=/var/log/containers/httpd/nova-api)\\n\\nTASK [nova logs readme] ********************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"c2216cc4edf5d3ce90f10748c3243db4e1842a85\\\", \\\"msg\\\": \\\"Destination directory /var/log/nova does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost] => (item=/var/log/containers/nova)\\nchanged: [localhost] => (item=/var/log/containers/httpd/nova-placement)\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/panko)\\nchanged: [localhost] => (item=/var/log/containers/httpd/panko-api)\\n\\nTASK [panko logs readme] *******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"903397bbd82e9b1f53087e3d7e8975d851857ce2\\\", \\\"msg\\\": \\\"Destination directory /var/log/panko does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/lib/rabbitmq)\\nchanged: [localhost] => (item=/var/log/containers/rabbitmq)\\n\\nTASK [rabbitmq logs readme] ****************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"ee241f2199f264c9d0f384cf389fe255e8bf8a77\\\", \\\"msg\\\": \\\"Destination directory /var/log/rabbitmq does not exist\\\"}\\n...ignoring\\n\\nTASK [stop the Erlang port mapper on the host and make sure it cannot bind to the port used by container] ***\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/lib/redis)\\nchanged: [localhost] => (item=/var/log/containers/redis)\\nok: [localhost] => (item=/var/run/redis)\\n\\nTASK [redis logs readme] *******************************************************\\nchanged: [localhost]\\n\\nTASK [create /var/lib/sahara] **************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent sahara logs directory] *********************************\\nchanged: [localhost]\\n\\nTASK [sahara logs readme] ******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"b0212a1177fa4a88502d17a1cbc31198040cf047\\\", \\\"msg\\\": \\\"Destination directory /var/log/sahara does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/srv/node)\\nchanged: [localhost] => (item=/var/log/swift)\\n\\nTASK [Create swift logging symlink] ********************************************\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/srv/node)\\nok: [localhost] => (item=/var/log/swift)\\nok: [localhost] => (item=/var/log/containers)\\n\\nTASK [Set swift_use_local_disks fact] ******************************************\\nok: [localhost]\\n\\nTASK [Create Swift d1 directory if needed] *************************************\\nchanged: [localhost]\\n\\nTASK [swift logs readme] *******************************************************\\nchanged: [localhost]\\n\\nTASK [Format SwiftRawDisks] ****************************************************\\n\\nTASK [Mount devices defined in SwiftRawDisks] **********************************\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=60 changed=33 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-22 04:49:49,407] (heat-config) [DEBUG] [2018-06-22 04:49:27,302] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/cd8e8c5c-b390-46c3-ba5a-ac1430678589_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/cd8e8c5c-b390-46c3-ba5a-ac1430678589_variables.json", "[2018-06-22 04:49:49,402] (heat-config) [INFO] Return code 
0", "[2018-06-22 04:49:49,403] (heat-config) [INFO] ", "PLAY [localhost] ***************************************************************", "", "TASK [Gathering Facts] *********************************************************", "ok: [localhost]", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/aodh)", "changed: [localhost] => (item=/var/log/containers/httpd/aodh-api)", "", "TASK [aodh logs readme] ********************************************************", "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"b6cf6dbe054f430c33d39c1a1a88593536d6e659\", \"msg\": \"Destination directory /var/log/aodh does not exist\"}", "...ignoring", "", "TASK [create persistent logs directory] ****************************************", "ok: [localhost]", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost]", "", "TASK [ceilometer logs readme] **************************************************", "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\", \"msg\": \"Destination directory /var/log/ceilometer does not exist\"}", "...ignoring", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/cinder)", "changed: [localhost] => (item=/var/log/containers/httpd/cinder-api)", "", "TASK [cinder logs readme] ******************************************************", "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"0a3814f5aad089ba842c13ffc2c7bb7a7b3e8292\", \"msg\": \"Destination directory /var/log/cinder does not exist\"}", "...ignoring", "", "TASK [create persistent directories] *******************************************", "changed: [localhost] => (item=/var/lib/cinder)", "ok: [localhost] => (item=/var/log/containers/cinder)", "", "TASK [ensure ceph configurations exist] ****************************************", "changed: [localhost]", "", "TASK [create persistent directories] *******************************************", "ok: [localhost] => (item=/var/log/containers/cinder)", "", "TASK [create persistent directories] *******************************************", "ok: [localhost] => (item=/var/log/containers/cinder)", "ok: [localhost] => (item=/var/lib/cinder)", "", "TASK [cinder_enable_iscsi_backend fact] ****************************************", "ok: [localhost]", "", "TASK [cinder create LVM volume group dd] ***************************************", "skipping: [localhost]", "", "TASK [cinder create LVM volume group] ******************************************", "skipping: [localhost]", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/glance)", "", "TASK [glance logs readme] ******************************************************", "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"e368ae3272baeb19e1113009ea5dae00e797c919\", \"msg\": \"Destination directory /var/log/glance does not exist\"}", "...ignoring", "", "TASK [set_fact] ****************************************************************", "skipping: [localhost]", "", "TASK [file] ********************************************************************", "skipping: [localhost]", "", "TASK [stat] ********************************************************************", "skipping: [localhost]", "", "TASK [copy] ********************************************************************", "skipping: [localhost] => (item={u'NETAPP_SHARE': u''}) ", "", "TASK [mount] *******************************************************************", "skipping: [localhost] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) ", "", "TASK [Mount Node Staging Location] *********************************************", "skipping: [localhost]", "", "TASK [Mount NFS on host] *******************************************************", "skipping: [localhost] => (item={u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0', u'NFS_SHARE': u''}) ", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/gnocchi)", "changed: [localhost] => (item=/var/log/containers/httpd/gnocchi-api)", "", "TASK [gnocchi logs readme] *****************************************************", "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"2f6114e0f135d7222e70a07579ab0b2b6f967ff8\", \"msg\": \"Destination directory /var/log/gnocchi does not exist\"}", "...ignoring", "", "TASK [create persistent logs directory] ****************************************", "ok: [localhost]", "", "TASK [get parameters] **********************************************************", "ok: [localhost]", "", "TASK [get DeployedSSLCertificatePath attributes] *******************************", "skipping: [localhost]", "", "TASK [Assign bootstrap node] ***************************************************", "skipping: [localhost]", "", "TASK [set is_bootstrap_node fact] **********************************************", "skipping: [localhost]", "", "TASK [get haproxy status] ******************************************************", "skipping: [localhost]", "", "TASK [get pacemaker status] ****************************************************", "skipping: [localhost]", "", "TASK [get docker status] *******************************************************", "skipping: [localhost]", "", "TASK [get container_id] ********************************************************", "skipping: [localhost]", "", "TASK [get pcs resource name for haproxy container] *****************************", "skipping: [localhost]", "", "TASK [remove DeployedSSLCertificatePath if is dir] *****************************", "skipping: [localhost]", "", "TASK [push certificate content] ************************************************", "skipping: [localhost]", "", "TASK [set certificate ownership] ***********************************************", "skipping: [localhost]", "", "TASK [reload haproxy if enabled] ***********************************************", "skipping: [localhost]", "", "TASK [restart pacemaker resource for haproxy] **********************************", "skipping: [localhost]", "", "TASK [set kolla_dir fact] ******************************************************", "skipping: [localhost]", "", "TASK [set certificate group 
on host via container] *****************************", "skipping: [localhost]", "", "TASK [copy certificate from kolla directory to final location] *****************", "skipping: [localhost]", "", "TASK [send restart order to haproxy container] *********************************", "skipping: [localhost]", "", "TASK [create persistent directories] *******************************************", "ok: [localhost] => (item=/var/lib/haproxy)", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/heat)", "changed: [localhost] => (item=/var/log/containers/httpd/heat-api)", "", "TASK [heat logs readme] ********************************************************", "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"d30ca3bda176434d31659e7379616dd162ddb246\", \"msg\": \"Destination directory /var/log/heat does not exist\"}", "...ignoring", "", "TASK [create persistent logs directory] ****************************************", "ok: [localhost] => (item=/var/log/containers/heat)", "changed: [localhost] => (item=/var/log/containers/httpd/heat-api-cfn)", "", "TASK [create persistent logs directory] ****************************************", "ok: [localhost]", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/horizon)", "changed: [localhost] => (item=/var/log/containers/httpd/horizon)", "", "TASK [horizon logs readme] *****************************************************", "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"ac324739761cb36b925d6e309482e26f7fe49b91\", \"msg\": \"Destination directory /var/log/horizon does not exist\"}", "...ignoring", "", "TASK [stat /lib/systemd/system/iscsid.socket] **********************************", "ok: [localhost]", "", "TASK [Stop and disable iscsid.socket service] **********************************", "changed: [localhost]", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/keystone)", "changed: [localhost] => (item=/var/log/containers/httpd/keystone)", "", "TASK [keystone logs readme] ****************************************************", "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"910be882addb6df99267e9bd303f6d9bf658562e\", \"msg\": \"Destination directory /var/log/keystone does not exist\"}", "...ignoring", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost]", "", "TASK [memcached logs readme] ***************************************************", "changed: [localhost]", "", "TASK [create persistent directories] *******************************************", "changed: [localhost] => (item=/var/log/containers/mysql)", "ok: [localhost] => (item=/var/lib/mysql)", "", "TASK [mysql logs readme] *******************************************************", "changed: [localhost]", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/neutron)", "changed: [localhost] => (item=/var/log/containers/httpd/neutron-api)", "", "TASK [neutron logs readme] *****************************************************", "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"f5a95f434a4aad25a9a81a045dec39159a6e8864\", \"msg\": \"Destination directory /var/log/neutron does not exist\"}", "...ignoring", "", "TASK [create persistent logs directory] ****************************************", "ok: [localhost] => (item=/var/log/containers/neutron)", "", "TASK [create /var/lib/neutron] *************************************************", "changed: [localhost]", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/nova)", "changed: [localhost] => (item=/var/log/containers/httpd/nova-api)", "", "TASK [nova logs readme] ********************************************************", "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"c2216cc4edf5d3ce90f10748c3243db4e1842a85\", \"msg\": \"Destination directory /var/log/nova does not exist\"}", "...ignoring", "", "TASK [create persistent logs directory] ****************************************", "ok: [localhost]", "", "TASK [create persistent logs directory] ****************************************", "ok: [localhost] => (item=/var/log/containers/nova)", "changed: [localhost] => (item=/var/log/containers/httpd/nova-placement)", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/panko)", "changed: [localhost] => (item=/var/log/containers/httpd/panko-api)", "", "TASK [panko logs readme] *******************************************************", "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"903397bbd82e9b1f53087e3d7e8975d851857ce2\", \"msg\": \"Destination directory /var/log/panko does not exist\"}", "...ignoring", "", "TASK [create persistent directories] *******************************************", "changed: [localhost] => (item=/var/lib/rabbitmq)", "changed: [localhost] => (item=/var/log/containers/rabbitmq)", "", "TASK [rabbitmq logs readme] ****************************************************", "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"ee241f2199f264c9d0f384cf389fe255e8bf8a77\", \"msg\": \"Destination directory /var/log/rabbitmq does not exist\"}", "...ignoring", "", "TASK [stop the Erlang port mapper on the host and make sure it cannot bind to the port used by container] ***", "changed: [localhost]", "", "TASK [create persistent directories] *******************************************", "ok: [localhost] => (item=/var/lib/redis)", "changed: [localhost] => (item=/var/log/containers/redis)", "ok: [localhost] => (item=/var/run/redis)", "", "TASK [redis logs readme] *******************************************************", "changed: [localhost]", "", "TASK [create /var/lib/sahara] **************************************************", "changed: [localhost]", "", "TASK [create persistent sahara logs directory] *********************************", "changed: [localhost]", "", "TASK [sahara logs readme] ******************************************************", "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"b0212a1177fa4a88502d17a1cbc31198040cf047\", \"msg\": \"Destination directory /var/log/sahara does not exist\"}", "...ignoring", "", "TASK [create persistent directories] *******************************************", "changed: [localhost] => (item=/srv/node)", "changed: [localhost] => (item=/var/log/swift)", "", "TASK [Create swift logging symlink] ********************************************", "changed: [localhost]", "", "TASK [create persistent directories] *******************************************", "ok: [localhost] => (item=/srv/node)", "ok: [localhost] => (item=/var/log/swift)", "ok: [localhost] => (item=/var/log/containers)", "", "TASK [Set swift_use_local_disks fact] ******************************************", "ok: [localhost]", "", "TASK [Create Swift d1 directory if needed] *************************************", "changed: [localhost]", "", "TASK [swift logs readme] *******************************************************", "changed: [localhost]", "", "TASK [Format SwiftRawDisks] ****************************************************", "", "TASK [Mount devices defined in SwiftRawDisks] **********************************", "", "TASK [Create /var/lib/docker-puppet] *******************************************", "changed: [localhost]", "", "TASK [Write docker-puppet.py] **************************************************", "changed: [localhost]", "", "PLAY RECAP *********************************************************************", "localhost : ok=60 changed=33 unreachable=0 failed=0 ", "", "", "[2018-06-22 04:49:49,403] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/cd8e8c5c-b390-46c3-ba5a-ac1430678589_playbook.yaml", "", "[2018-06-22 04:49:49,407] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", "[2018-06-22 04:49:49,408] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/cd8e8c5c-b390-46c3-ba5a-ac1430678589.json < 
/var/lib/heat-config/deployed/cd8e8c5c-b390-46c3-ba5a-ac1430678589.notify.json", "[2018-06-22 04:49:49,828] (heat-config) [INFO] ", "[2018-06-22 04:49:49,828] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-22 04:49:49,880 p=11115 u=mistral | TASK [Output for ControllerHostPrepDeployment] ********************************* >2018-06-22 04:49:49,993 p=11115 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-22 04:49:27,278] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/cd8e8c5c-b390-46c3-ba5a-ac1430678589.json", > "[2018-06-22 04:49:49,407] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/aodh)\\nchanged: [localhost] => (item=/var/log/containers/httpd/aodh-api)\\n\\nTASK [aodh logs readme] ********************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"b6cf6dbe054f430c33d39c1a1a88593536d6e659\\\", \\\"msg\\\": \\\"Destination directory /var/log/aodh does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [ceilometer logs readme] **************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\\\", \\\"msg\\\": \\\"Destination directory /var/log/ceilometer does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/cinder)\\nchanged: [localhost] => (item=/var/log/containers/httpd/cinder-api)\\n\\nTASK [cinder logs readme] ******************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"0a3814f5aad089ba842c13ffc2c7bb7a7b3e8292\\\", \\\"msg\\\": \\\"Destination directory /var/log/cinder does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/lib/cinder)\\nok: [localhost] => (item=/var/log/containers/cinder)\\n\\nTASK [ensure ceph configurations exist] ****************************************\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/log/containers/cinder)\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/log/containers/cinder)\\nok: [localhost] => (item=/var/lib/cinder)\\n\\nTASK [cinder_enable_iscsi_backend fact] ****************************************\\nok: [localhost]\\n\\nTASK [cinder create LVM volume group dd] ***************************************\\nskipping: [localhost]\\n\\nTASK [cinder create LVM volume group] ******************************************\\nskipping: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/glance)\\n\\nTASK [glance logs readme] ******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"e368ae3272baeb19e1113009ea5dae00e797c919\\\", \\\"msg\\\": \\\"Destination directory /var/log/glance does not exist\\\"}\\n...ignoring\\n\\nTASK [set_fact] ****************************************************************\\nskipping: [localhost]\\n\\nTASK [file] ********************************************************************\\nskipping: [localhost]\\n\\nTASK [stat] ********************************************************************\\nskipping: [localhost]\\n\\nTASK [copy] ********************************************************************\\nskipping: [localhost] => (item={u'NETAPP_SHARE': u''}) \\n\\nTASK [mount] *******************************************************************\\nskipping: [localhost] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) \\n\\nTASK [Mount Node Staging Location] *********************************************\\nskipping: [localhost]\\n\\nTASK [Mount NFS on host] *******************************************************\\nskipping: [localhost] => (item={u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0', u'NFS_SHARE': u''}) \\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/gnocchi)\\nchanged: [localhost] => (item=/var/log/containers/httpd/gnocchi-api)\\n\\nTASK [gnocchi logs readme] *****************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"2f6114e0f135d7222e70a07579ab0b2b6f967ff8\\\", \\\"msg\\\": \\\"Destination directory /var/log/gnocchi does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [get parameters] **********************************************************\\nok: [localhost]\\n\\nTASK [get DeployedSSLCertificatePath attributes] *******************************\\nskipping: [localhost]\\n\\nTASK [Assign bootstrap node] ***************************************************\\nskipping: [localhost]\\n\\nTASK [set is_bootstrap_node fact] **********************************************\\nskipping: [localhost]\\n\\nTASK [get haproxy status] ******************************************************\\nskipping: [localhost]\\n\\nTASK [get pacemaker status] ****************************************************\\nskipping: [localhost]\\n\\nTASK [get docker status] *******************************************************\\nskipping: [localhost]\\n\\nTASK [get container_id] ********************************************************\\nskipping: [localhost]\\n\\nTASK [get pcs resource name for haproxy container] *****************************\\nskipping: [localhost]\\n\\nTASK [remove DeployedSSLCertificatePath if is dir] *****************************\\nskipping: [localhost]\\n\\nTASK [push certificate content] ************************************************\\nskipping: [localhost]\\n\\nTASK [set certificate ownership] ***********************************************\\nskipping: [localhost]\\n\\nTASK [reload haproxy if enabled] ***********************************************\\nskipping: [localhost]\\n\\nTASK [restart pacemaker resource for haproxy] **********************************\\nskipping: [localhost]\\n\\nTASK [set kolla_dir fact] ******************************************************\\nskipping: [localhost]\\n\\nTASK [set certificate group on host via container] 
*****************************\\nskipping: [localhost]\\n\\nTASK [copy certificate from kolla directory to final location] *****************\\nskipping: [localhost]\\n\\nTASK [send restart order to haproxy container] *********************************\\nskipping: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/lib/haproxy)\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/heat)\\nchanged: [localhost] => (item=/var/log/containers/httpd/heat-api)\\n\\nTASK [heat logs readme] ********************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"d30ca3bda176434d31659e7379616dd162ddb246\\\", \\\"msg\\\": \\\"Destination directory /var/log/heat does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost] => (item=/var/log/containers/heat)\\nchanged: [localhost] => (item=/var/log/containers/httpd/heat-api-cfn)\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/horizon)\\nchanged: [localhost] => (item=/var/log/containers/httpd/horizon)\\n\\nTASK [horizon logs readme] *****************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"ac324739761cb36b925d6e309482e26f7fe49b91\\\", \\\"msg\\\": \\\"Destination directory /var/log/horizon does not exist\\\"}\\n...ignoring\\n\\nTASK [stat /lib/systemd/system/iscsid.socket] **********************************\\nok: [localhost]\\n\\nTASK [Stop and disable iscsid.socket service] **********************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/keystone)\\nchanged: [localhost] => (item=/var/log/containers/httpd/keystone)\\n\\nTASK [keystone logs readme] ****************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"910be882addb6df99267e9bd303f6d9bf658562e\\\", \\\"msg\\\": \\\"Destination directory /var/log/keystone does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [memcached logs readme] ***************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/log/containers/mysql)\\nok: [localhost] => (item=/var/lib/mysql)\\n\\nTASK [mysql logs readme] *******************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/neutron)\\nchanged: [localhost] => (item=/var/log/containers/httpd/neutron-api)\\n\\nTASK [neutron logs readme] *****************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"f5a95f434a4aad25a9a81a045dec39159a6e8864\\\", \\\"msg\\\": \\\"Destination directory /var/log/neutron does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost] => (item=/var/log/containers/neutron)\\n\\nTASK [create /var/lib/neutron] *************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/nova)\\nchanged: [localhost] => (item=/var/log/containers/httpd/nova-api)\\n\\nTASK [nova logs readme] ********************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"c2216cc4edf5d3ce90f10748c3243db4e1842a85\\\", \\\"msg\\\": \\\"Destination directory /var/log/nova does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost] => (item=/var/log/containers/nova)\\nchanged: [localhost] => (item=/var/log/containers/httpd/nova-placement)\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/panko)\\nchanged: [localhost] => (item=/var/log/containers/httpd/panko-api)\\n\\nTASK [panko logs readme] *******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"903397bbd82e9b1f53087e3d7e8975d851857ce2\\\", \\\"msg\\\": \\\"Destination directory /var/log/panko does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/lib/rabbitmq)\\nchanged: [localhost] => (item=/var/log/containers/rabbitmq)\\n\\nTASK [rabbitmq logs readme] ****************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"ee241f2199f264c9d0f384cf389fe255e8bf8a77\\\", \\\"msg\\\": \\\"Destination directory /var/log/rabbitmq does not exist\\\"}\\n...ignoring\\n\\nTASK [stop the Erlang port mapper on the host and make sure it cannot bind to the port used by container] ***\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/lib/redis)\\nchanged: [localhost] => (item=/var/log/containers/redis)\\nok: [localhost] => (item=/var/run/redis)\\n\\nTASK [redis logs readme] *******************************************************\\nchanged: [localhost]\\n\\nTASK [create /var/lib/sahara] **************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent sahara logs directory] *********************************\\nchanged: [localhost]\\n\\nTASK [sahara logs readme] ******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"b0212a1177fa4a88502d17a1cbc31198040cf047\\\", \\\"msg\\\": \\\"Destination directory /var/log/sahara does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/srv/node)\\nchanged: [localhost] => (item=/var/log/swift)\\n\\nTASK [Create swift logging symlink] ********************************************\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/srv/node)\\nok: [localhost] => (item=/var/log/swift)\\nok: [localhost] => (item=/var/log/containers)\\n\\nTASK [Set swift_use_local_disks fact] ******************************************\\nok: [localhost]\\n\\nTASK [Create Swift d1 directory if needed] *************************************\\nchanged: [localhost]\\n\\nTASK [swift logs readme] *******************************************************\\nchanged: [localhost]\\n\\nTASK [Format SwiftRawDisks] ****************************************************\\n\\nTASK [Mount devices defined in SwiftRawDisks] **********************************\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=60 changed=33 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-22 04:49:49,407] (heat-config) [DEBUG] [2018-06-22 04:49:27,302] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/cd8e8c5c-b390-46c3-ba5a-ac1430678589_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/cd8e8c5c-b390-46c3-ba5a-ac1430678589_variables.json", > "[2018-06-22 04:49:49,402] (heat-config) [INFO] Return 
code 0", > "[2018-06-22 04:49:49,403] (heat-config) [INFO] ", > "PLAY [localhost] ***************************************************************", > "", > "TASK [Gathering Facts] *********************************************************", > "ok: [localhost]", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/aodh)", > "changed: [localhost] => (item=/var/log/containers/httpd/aodh-api)", > "", > "TASK [aodh logs readme] ********************************************************", > "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"b6cf6dbe054f430c33d39c1a1a88593536d6e659\", \"msg\": \"Destination directory /var/log/aodh does not exist\"}", > "...ignoring", > "", > "TASK [create persistent logs directory] ****************************************", > "ok: [localhost]", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost]", > "", > "TASK [ceilometer logs readme] **************************************************", > "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\", \"msg\": \"Destination directory /var/log/ceilometer does not exist\"}", > "...ignoring", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/cinder)", > "changed: [localhost] => (item=/var/log/containers/httpd/cinder-api)", > "", > "TASK [cinder logs readme] ******************************************************", > "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"0a3814f5aad089ba842c13ffc2c7bb7a7b3e8292\", \"msg\": \"Destination directory /var/log/cinder does not exist\"}", > "...ignoring", > "", > "TASK [create persistent directories] *******************************************", > "changed: [localhost] => (item=/var/lib/cinder)", > "ok: [localhost] => (item=/var/log/containers/cinder)", > "", > "TASK [ensure ceph configurations exist] ****************************************", > "changed: [localhost]", > "", > "TASK [create persistent directories] *******************************************", > "ok: [localhost] => (item=/var/log/containers/cinder)", > "", > "TASK [create persistent directories] *******************************************", > "ok: [localhost] => (item=/var/log/containers/cinder)", > "ok: [localhost] => (item=/var/lib/cinder)", > "", > "TASK [cinder_enable_iscsi_backend fact] ****************************************", > "ok: [localhost]", > "", > "TASK [cinder create LVM volume group dd] ***************************************", > "skipping: [localhost]", > "", > "TASK [cinder create LVM volume group] ******************************************", > "skipping: [localhost]", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/glance)", > "", > "TASK [glance logs readme] ******************************************************", > "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"e368ae3272baeb19e1113009ea5dae00e797c919\", \"msg\": \"Destination directory /var/log/glance does not exist\"}", > "...ignoring", > "", > "TASK [set_fact] ****************************************************************", > "skipping: [localhost]", > "", > "TASK [file] ********************************************************************", > "skipping: [localhost]", > "", > "TASK [stat] ********************************************************************", > "skipping: [localhost]", > "", > "TASK [copy] ********************************************************************", > "skipping: [localhost] => (item={u'NETAPP_SHARE': u''}) ", > "", > "TASK [mount] *******************************************************************", > "skipping: [localhost] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) ", > "", > "TASK [Mount Node Staging Location] *********************************************", > "skipping: [localhost]", > "", > "TASK [Mount NFS on host] *******************************************************", > "skipping: [localhost] => (item={u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0', u'NFS_SHARE': u''}) ", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/gnocchi)", > "changed: [localhost] => (item=/var/log/containers/httpd/gnocchi-api)", > "", > "TASK [gnocchi logs readme] *****************************************************", > "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"2f6114e0f135d7222e70a07579ab0b2b6f967ff8\", \"msg\": \"Destination directory /var/log/gnocchi does not exist\"}", > "...ignoring", > "", > "TASK [create persistent logs directory] ****************************************", > "ok: [localhost]", > "", > "TASK [get parameters] **********************************************************", > "ok: [localhost]", > "", > "TASK [get DeployedSSLCertificatePath attributes] *******************************", > "skipping: [localhost]", > "", > "TASK [Assign bootstrap node] ***************************************************", > "skipping: [localhost]", > "", > "TASK [set is_bootstrap_node fact] **********************************************", > "skipping: [localhost]", > "", > "TASK [get haproxy status] ******************************************************", > "skipping: [localhost]", > "", > "TASK [get pacemaker status] ****************************************************", > "skipping: [localhost]", > "", > "TASK [get docker status] *******************************************************", > "skipping: [localhost]", > "", > "TASK [get container_id] ********************************************************", > "skipping: [localhost]", > "", > "TASK [get pcs resource name for haproxy container] *****************************", > "skipping: [localhost]", > "", > "TASK [remove DeployedSSLCertificatePath if is dir] *****************************", > "skipping: [localhost]", > "", > "TASK [push certificate content] ************************************************", > "skipping: [localhost]", > "", > "TASK [set certificate ownership] ***********************************************", > "skipping: [localhost]", > "", > "TASK [reload haproxy if enabled] ***********************************************", > "skipping: [localhost]", > "", > "TASK [restart pacemaker resource for haproxy] **********************************", > "skipping: [localhost]", > "", > "TASK [set kolla_dir fact] 
******************************************************", > "skipping: [localhost]", > "", > "TASK [set certificate group on host via container] *****************************", > "skipping: [localhost]", > "", > "TASK [copy certificate from kolla directory to final location] *****************", > "skipping: [localhost]", > "", > "TASK [send restart order to haproxy container] *********************************", > "skipping: [localhost]", > "", > "TASK [create persistent directories] *******************************************", > "ok: [localhost] => (item=/var/lib/haproxy)", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/heat)", > "changed: [localhost] => (item=/var/log/containers/httpd/heat-api)", > "", > "TASK [heat logs readme] ********************************************************", > "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"d30ca3bda176434d31659e7379616dd162ddb246\", \"msg\": \"Destination directory /var/log/heat does not exist\"}", > "...ignoring", > "", > "TASK [create persistent logs directory] ****************************************", > "ok: [localhost] => (item=/var/log/containers/heat)", > "changed: [localhost] => (item=/var/log/containers/httpd/heat-api-cfn)", > "", > "TASK [create persistent logs directory] ****************************************", > "ok: [localhost]", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/horizon)", > "changed: [localhost] => (item=/var/log/containers/httpd/horizon)", > "", > "TASK [horizon logs readme] *****************************************************", > "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"ac324739761cb36b925d6e309482e26f7fe49b91\", \"msg\": \"Destination directory /var/log/horizon does not exist\"}", > "...ignoring", > "", > "TASK [stat /lib/systemd/system/iscsid.socket] **********************************", > "ok: [localhost]", > "", > "TASK [Stop and disable iscsid.socket service] **********************************", > "changed: [localhost]", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/keystone)", > "changed: [localhost] => (item=/var/log/containers/httpd/keystone)", > "", > "TASK [keystone logs readme] ****************************************************", > "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"910be882addb6df99267e9bd303f6d9bf658562e\", \"msg\": \"Destination directory /var/log/keystone does not exist\"}", > "...ignoring", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost]", > "", > "TASK [memcached logs readme] ***************************************************", > "changed: [localhost]", > "", > "TASK [create persistent directories] *******************************************", > "changed: [localhost] => (item=/var/log/containers/mysql)", > "ok: [localhost] => (item=/var/lib/mysql)", > "", > "TASK [mysql logs readme] *******************************************************", > "changed: [localhost]", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/neutron)", > "changed: [localhost] => (item=/var/log/containers/httpd/neutron-api)", > "", > "TASK [neutron logs readme] *****************************************************", > "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"f5a95f434a4aad25a9a81a045dec39159a6e8864\", \"msg\": \"Destination directory /var/log/neutron does not exist\"}", > "...ignoring", > "", > "TASK [create persistent logs directory] ****************************************", > "ok: [localhost] => (item=/var/log/containers/neutron)", > "", > "TASK [create /var/lib/neutron] *************************************************", > "changed: [localhost]", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/nova)", > "changed: [localhost] => (item=/var/log/containers/httpd/nova-api)", > "", > "TASK [nova logs readme] ********************************************************", > "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"c2216cc4edf5d3ce90f10748c3243db4e1842a85\", \"msg\": \"Destination directory /var/log/nova does not exist\"}", > "...ignoring", > "", > "TASK [create persistent logs directory] ****************************************", > "ok: [localhost]", > "", > "TASK [create persistent logs directory] ****************************************", > "ok: [localhost] => (item=/var/log/containers/nova)", > "changed: [localhost] => (item=/var/log/containers/httpd/nova-placement)", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/panko)", > "changed: [localhost] => (item=/var/log/containers/httpd/panko-api)", > "", > "TASK [panko logs readme] *******************************************************", > "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"903397bbd82e9b1f53087e3d7e8975d851857ce2\", \"msg\": \"Destination directory /var/log/panko does not exist\"}", > "...ignoring", > "", > "TASK [create persistent directories] *******************************************", > "changed: [localhost] => (item=/var/lib/rabbitmq)", > "changed: [localhost] => (item=/var/log/containers/rabbitmq)", > "", > "TASK [rabbitmq logs readme] ****************************************************", > "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"ee241f2199f264c9d0f384cf389fe255e8bf8a77\", \"msg\": \"Destination directory /var/log/rabbitmq does not exist\"}", > "...ignoring", > "", > "TASK [stop the Erlang port mapper on the host and make sure it cannot bind to the port used by container] ***", > "changed: [localhost]", > "", > "TASK [create persistent directories] *******************************************", > "ok: [localhost] => (item=/var/lib/redis)", > "changed: [localhost] => (item=/var/log/containers/redis)", > "ok: [localhost] => (item=/var/run/redis)", > "", > "TASK [redis logs readme] *******************************************************", > "changed: [localhost]", > "", > "TASK [create /var/lib/sahara] **************************************************", > "changed: [localhost]", > "", > "TASK [create persistent sahara logs directory] *********************************", > "changed: [localhost]", > "", > "TASK [sahara logs readme] ******************************************************", > "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"b0212a1177fa4a88502d17a1cbc31198040cf047\", \"msg\": \"Destination directory /var/log/sahara does not exist\"}", > "...ignoring", > "", > "TASK [create persistent directories] *******************************************", > "changed: [localhost] => (item=/srv/node)", > "changed: [localhost] => (item=/var/log/swift)", > "", > "TASK [Create swift logging symlink] ********************************************", > "changed: [localhost]", > "", > "TASK [create persistent directories] *******************************************", > "ok: [localhost] => (item=/srv/node)", > "ok: [localhost] => (item=/var/log/swift)", > "ok: [localhost] => (item=/var/log/containers)", > "", > "TASK [Set swift_use_local_disks fact] ******************************************", > "ok: [localhost]", > "", > "TASK [Create Swift d1 directory if needed] *************************************", > "changed: [localhost]", > "", > "TASK [swift logs readme] *******************************************************", > "changed: [localhost]", > "", > "TASK [Format SwiftRawDisks] ****************************************************", > "", > "TASK [Mount devices defined in SwiftRawDisks] **********************************", > "", > "TASK [Create /var/lib/docker-puppet] *******************************************", > "changed: [localhost]", > "", > "TASK [Write docker-puppet.py] **************************************************", > "changed: [localhost]", > "", > "PLAY RECAP *********************************************************************", > "localhost : ok=60 changed=33 unreachable=0 failed=0 ", > "", > "", > "[2018-06-22 04:49:49,403] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/cd8e8c5c-b390-46c3-ba5a-ac1430678589_playbook.yaml", > "", > "[2018-06-22 04:49:49,407] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", > "[2018-06-22 04:49:49,408] (heat-config) [DEBUG] Running heat-config-notify 
/var/lib/heat-config/deployed/cd8e8c5c-b390-46c3-ba5a-ac1430678589.json < /var/lib/heat-config/deployed/cd8e8c5c-b390-46c3-ba5a-ac1430678589.notify.json", > "[2018-06-22 04:49:49,828] (heat-config) [INFO] ", > "[2018-06-22 04:49:49,828] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-22 04:49:50,021 p=11115 u=mistral | TASK [Check-mode for Run deployment ControllerHostPrepDeployment] ************** >2018-06-22 04:49:50,035 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:49:50,059 p=11115 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-22 04:49:50,158 p=11115 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "eb2ec995-5b0b-4c29-9168-5182240a8969"}, "changed": false} >2018-06-22 04:49:50,182 p=11115 u=mistral | TASK [Render deployment file for ControllerArtifactsDeploy] ******************** >2018-06-22 04:49:50,855 p=11115 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "0c84b24b8044d609ca5040674a512196f407d776", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerArtifactsDeploy-eb2ec995-5b0b-4c29-9168-5182240a8969", "gid": 0, "group": "root", "md5sum": "483d516a31c55815e019fb76adae0710", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2021, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657390.28-158562957009835/source", "state": "file", "uid": 0} >2018-06-22 04:49:50,880 p=11115 u=mistral | TASK [Check if deployed file exists for ControllerArtifactsDeploy] ************* >2018-06-22 04:49:51,260 p=11115 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 04:49:51,286 p=11115 u=mistral | TASK [Check previous deployment rc for ControllerArtifactsDeploy] ************** >2018-06-22 04:49:51,303 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, 
"skip_reason": "Conditional result was False"} >2018-06-22 04:49:51,326 p=11115 u=mistral | TASK [Remove deployed file for ControllerArtifactsDeploy when previous deployment failed] *** >2018-06-22 04:49:51,346 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:49:51,369 p=11115 u=mistral | TASK [Force remove deployed file for ControllerArtifactsDeploy] **************** >2018-06-22 04:49:51,387 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:49:51,410 p=11115 u=mistral | TASK [Run deployment ControllerArtifactsDeploy] ******************************** >2018-06-22 04:49:52,279 p=11115 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/eb2ec995-5b0b-4c29-9168-5182240a8969.notify.json)", "delta": "0:00:00.456365", "end": "2018-06-22 04:49:52.244017", "rc": 0, "start": "2018-06-22 04:49:51.787652", "stderr": "[2018-06-22 04:49:51,810] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/eb2ec995-5b0b-4c29-9168-5182240a8969.json\n[2018-06-22 04:49:51,840] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. 
Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-22 04:49:51,841] (heat-config) [DEBUG] [2018-06-22 04:49:51,831] (heat-config) [INFO] artifact_urls=\n[2018-06-22 04:49:51,831] (heat-config) [INFO] deploy_server_id=c1fa7088-58e0-4167-924a-7460143754f1\n[2018-06-22 04:49:51,831] (heat-config) [INFO] deploy_action=CREATE\n[2018-06-22 04:49:51,832] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-lyl23itvojuz-ControllerArtifactsDeploy-xa7fd5xfponc-0-zaa72jxd5w2e/e892d6f9-4488-4791-a94b-834908ebb31b\n[2018-06-22 04:49:51,832] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-22 04:49:51,832] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-22 04:49:51,832] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/eb2ec995-5b0b-4c29-9168-5182240a8969\n[2018-06-22 04:49:51,837] (heat-config) [INFO] No artifact_urls was set. Skipping...\n\n[2018-06-22 04:49:51,837] (heat-config) [DEBUG] \n[2018-06-22 04:49:51,837] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/eb2ec995-5b0b-4c29-9168-5182240a8969\n\n[2018-06-22 04:49:51,841] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-22 04:49:51,841] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/eb2ec995-5b0b-4c29-9168-5182240a8969.json < /var/lib/heat-config/deployed/eb2ec995-5b0b-4c29-9168-5182240a8969.notify.json\n[2018-06-22 04:49:52,237] (heat-config) [INFO] \n[2018-06-22 04:49:52,237] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-22 04:49:51,810] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/eb2ec995-5b0b-4c29-9168-5182240a8969.json", "[2018-06-22 04:49:51,840] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. 
Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-22 04:49:51,841] (heat-config) [DEBUG] [2018-06-22 04:49:51,831] (heat-config) [INFO] artifact_urls=", "[2018-06-22 04:49:51,831] (heat-config) [INFO] deploy_server_id=c1fa7088-58e0-4167-924a-7460143754f1", "[2018-06-22 04:49:51,831] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-22 04:49:51,832] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-lyl23itvojuz-ControllerArtifactsDeploy-xa7fd5xfponc-0-zaa72jxd5w2e/e892d6f9-4488-4791-a94b-834908ebb31b", "[2018-06-22 04:49:51,832] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-22 04:49:51,832] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-22 04:49:51,832] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/eb2ec995-5b0b-4c29-9168-5182240a8969", "[2018-06-22 04:49:51,837] (heat-config) [INFO] No artifact_urls was set. Skipping...", "", "[2018-06-22 04:49:51,837] (heat-config) [DEBUG] ", "[2018-06-22 04:49:51,837] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/eb2ec995-5b0b-4c29-9168-5182240a8969", "", "[2018-06-22 04:49:51,841] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-22 04:49:51,841] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/eb2ec995-5b0b-4c29-9168-5182240a8969.json < /var/lib/heat-config/deployed/eb2ec995-5b0b-4c29-9168-5182240a8969.notify.json", "[2018-06-22 04:49:52,237] (heat-config) [INFO] ", "[2018-06-22 04:49:52,237] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-22 04:49:52,302 p=11115 u=mistral | TASK [Output for ControllerArtifactsDeploy] ************************************ >2018-06-22 04:49:52,350 p=11115 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-22 04:49:51,810] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < 
/var/lib/heat-config/deployed/eb2ec995-5b0b-4c29-9168-5182240a8969.json", > "[2018-06-22 04:49:51,840] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-22 04:49:51,841] (heat-config) [DEBUG] [2018-06-22 04:49:51,831] (heat-config) [INFO] artifact_urls=", > "[2018-06-22 04:49:51,831] (heat-config) [INFO] deploy_server_id=c1fa7088-58e0-4167-924a-7460143754f1", > "[2018-06-22 04:49:51,831] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-22 04:49:51,832] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-lyl23itvojuz-ControllerArtifactsDeploy-xa7fd5xfponc-0-zaa72jxd5w2e/e892d6f9-4488-4791-a94b-834908ebb31b", > "[2018-06-22 04:49:51,832] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-22 04:49:51,832] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-22 04:49:51,832] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/eb2ec995-5b0b-4c29-9168-5182240a8969", > "[2018-06-22 04:49:51,837] (heat-config) [INFO] No artifact_urls was set. 
Skipping...", > "", > "[2018-06-22 04:49:51,837] (heat-config) [DEBUG] ", > "[2018-06-22 04:49:51,837] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/eb2ec995-5b0b-4c29-9168-5182240a8969", > "", > "[2018-06-22 04:49:51,841] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-22 04:49:51,841] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/eb2ec995-5b0b-4c29-9168-5182240a8969.json < /var/lib/heat-config/deployed/eb2ec995-5b0b-4c29-9168-5182240a8969.notify.json", > "[2018-06-22 04:49:52,237] (heat-config) [INFO] ", > "[2018-06-22 04:49:52,237] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-22 04:49:52,373 p=11115 u=mistral | TASK [Check-mode for Run deployment ControllerArtifactsDeploy] ***************** >2018-06-22 04:49:52,386 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:49:52,407 p=11115 u=mistral | TASK [include] ***************************************************************** >2018-06-22 04:49:52,609 p=11115 u=mistral | included: /var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/Compute/deployments.yaml for compute-0 >2018-06-22 04:49:52,617 p=11115 u=mistral | included: /var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/Compute/deployments.yaml for compute-0 >2018-06-22 04:49:52,625 p=11115 u=mistral | included: /var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/Compute/deployments.yaml for compute-0 >2018-06-22 04:49:52,632 p=11115 u=mistral | included: /var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/Compute/deployments.yaml for compute-0 >2018-06-22 04:49:52,640 p=11115 u=mistral | included: /var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/Compute/deployments.yaml for compute-0 >2018-06-22 04:49:52,648 p=11115 u=mistral | included: /var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/Compute/deployments.yaml for compute-0 >2018-06-22 
04:49:52,655 p=11115 u=mistral | included: /var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/Compute/deployments.yaml for compute-0 >2018-06-22 04:49:52,663 p=11115 u=mistral | included: /var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/Compute/deployments.yaml for compute-0 >2018-06-22 04:49:52,701 p=11115 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-22 04:49:52,765 p=11115 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "809d11ee-8772-4ffa-afde-0dbb5b84abb6"}, "changed": false} >2018-06-22 04:49:52,783 p=11115 u=mistral | TASK [Render deployment file for NetworkDeployment] **************************** >2018-06-22 04:49:53,420 p=11115 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "22f02c7d18e2ae7a6fc090ec4e72dc803f218013", "dest": "/var/lib/heat-config/tripleo-config-download/NetworkDeployment-809d11ee-8772-4ffa-afde-0dbb5b84abb6", "gid": 0, "group": "root", "md5sum": "867df7ba1917bd61f8e92d2400783f9d", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 9259, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657392.85-231036980531544/source", "state": "file", "uid": 0} >2018-06-22 04:49:53,440 p=11115 u=mistral | TASK [Check if deployed file exists for NetworkDeployment] ********************* >2018-06-22 04:49:53,768 p=11115 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 04:49:53,787 p=11115 u=mistral | TASK [Check previous deployment rc for NetworkDeployment] ********************** >2018-06-22 04:49:53,803 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:49:53,821 p=11115 u=mistral | TASK [Remove deployed file for NetworkDeployment when previous deployment failed] *** >2018-06-22 04:49:53,838 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was 
False"} >2018-06-22 04:49:53,856 p=11115 u=mistral | TASK [Force remove deployed file for NetworkDeployment] ************************ >2018-06-22 04:49:53,871 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:49:53,890 p=11115 u=mistral | TASK [Run deployment NetworkDeployment] **************************************** >2018-06-22 04:50:13,908 p=11115 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/809d11ee-8772-4ffa-afde-0dbb5b84abb6.notify.json)", "delta": "0:00:19.673111", "end": "2018-06-22 04:50:13.897005", "rc": 0, "start": "2018-06-22 04:49:54.223894", "stderr": "[2018-06-22 04:49:54,248] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/809d11ee-8772-4ffa-afde-0dbb5b84abb6.json\n[2018-06-22 04:50:13,488] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.3...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.16/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.14/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.13/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.15/24\\\"}], \\\"type\\\": \\\"vlan\\\", 
\\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.16/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.14/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.13/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.15/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/06/22 04:49:54 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/06/22 04:49:54 AM] [INFO] Ifcfg net config provider created.\\n[2018/06/22 04:49:54 AM] [INFO] Not using any mapping file.\\n[2018/06/22 04:49:54 AM] [INFO] Finding active nics\\n[2018/06/22 04:49:54 AM] [INFO] eth1 is an embedded active 
nic\\n[2018/06/22 04:49:54 AM] [INFO] eth0 is an embedded active nic\\n[2018/06/22 04:49:54 AM] [INFO] eth2 is an embedded active nic\\n[2018/06/22 04:49:54 AM] [INFO] lo is not an active nic\\n[2018/06/22 04:49:54 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/06/22 04:49:54 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/06/22 04:49:54 AM] [INFO] nic3 mapped to: eth2\\n[2018/06/22 04:49:54 AM] [INFO] nic2 mapped to: eth1\\n[2018/06/22 04:49:54 AM] [INFO] nic1 mapped to: eth0\\n[2018/06/22 04:49:54 AM] [INFO] adding interface: eth0\\n[2018/06/22 04:49:54 AM] [INFO] adding custom route for interface: eth0\\n[2018/06/22 04:49:54 AM] [INFO] adding bridge: br-isolated\\n[2018/06/22 04:49:54 AM] [INFO] adding interface: eth1\\n[2018/06/22 04:49:54 AM] [INFO] adding vlan: vlan20\\n[2018/06/22 04:49:54 AM] [INFO] adding vlan: vlan30\\n[2018/06/22 04:49:54 AM] [INFO] adding vlan: vlan50\\n[2018/06/22 04:49:54 AM] [INFO] adding interface: eth2\\n[2018/06/22 04:49:54 AM] [INFO] applying network configs...\\n[2018/06/22 04:49:54 AM] [INFO] running ifdown on interface: vlan20\\n[2018/06/22 04:49:54 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/22 04:49:54 AM] [INFO] running ifdown on interface: vlan50\\n[2018/06/22 04:49:54 AM] [INFO] running ifdown on interface: eth2\\n[2018/06/22 04:49:55 AM] [INFO] running ifdown on interface: eth1\\n[2018/06/22 04:49:55 AM] [INFO] running ifdown on interface: eth0\\n[2018/06/22 04:49:55 AM] [INFO] running ifdown on interface: vlan20\\n[2018/06/22 04:49:55 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/22 04:49:55 AM] [INFO] running ifdown on interface: vlan50\\n[2018/06/22 04:49:55 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\\n[2018/06/22 04:49:55 AM] [INFO] 
Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/06/22 04:49:55 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/06/22 04:49:55 AM] [INFO] running ifup on interface: eth2\\n[2018/06/22 04:49:55 AM] [INFO] running ifup on interface: eth1\\n[2018/06/22 04:49:55 AM] [INFO] running ifup on interface: 
eth0\\n[2018/06/22 04:50:00 AM] [INFO] running ifup on interface: vlan20\\n[2018/06/22 04:50:04 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/22 04:50:08 AM] [INFO] running ifup on interface: vlan50\\n[2018/06/22 04:50:12 AM] [INFO] running ifup on interface: vlan20\\n[2018/06/22 04:50:12 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/22 04:50:13 AM] [INFO] running ifup on interface: vlan50\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.3\\n++ '[' -n 192.168.24.3 ']'\\n++ break\\n++ echo 192.168.24.3\\n+ local METADATA_IP=192.168.24.3\\n+ '[' -n 192.168.24.3 ']'\\n+ is_local_ip 192.168.24.3\\n+ local IP_TO_CHECK=192.168.24.3\\n+ ip -o a\\n+ grep 'inet6\\\\? 
192.168.24.3/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\\n+ _ping=ping\\n+ [[ 192.168.24.3 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.3\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}\n[2018-06-22 04:50:13,488] (heat-config) [DEBUG] [2018-06-22 04:49:54,271] (heat-config) [INFO] interface_name=nic1\n[2018-06-22 04:49:54,271] (heat-config) [INFO] bridge_name=br-ex\n[2018-06-22 04:49:54,271] (heat-config) [INFO] deploy_server_id=873c916c-1df4-487f-9ebb-a2c81aa5dfd9\n[2018-06-22 04:49:54,271] (heat-config) [INFO] deploy_action=CREATE\n[2018-06-22 04:49:54,271] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-pkczk7wkj4ny-0-5yammmgid6qr-NetworkDeployment-mawtpgoyduf7-TripleOSoftwareDeployment-vaegjpwaztpg/1a121906-f58e-4e19-8cc5-4d4d4f95ca52\n[2018-06-22 04:49:54,271] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-22 04:49:54,271] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-22 04:49:54,271] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/809d11ee-8772-4ffa-afde-0dbb5b84abb6\n[2018-06-22 04:50:13,484] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.3...SUCCESS\n\n[2018-06-22 04:50:13,484] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.16/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": 
\"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.14/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.13/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.15/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}' ']'\n+ '[' -z '' ']'\n+ trap configure_safe_defaults EXIT\n+ mkdir -p /etc/os-net-config\n+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.16/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.14/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.13/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.15/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}'\n++ type -t network_config_hook\n+ '[' '' = function ']'\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\n+ set +e\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\n[2018/06/22 04:49:54 AM] [INFO] Using config file at: /etc/os-net-config/config.json\n[2018/06/22 04:49:54 AM] [INFO] Ifcfg net config provider created.\n[2018/06/22 04:49:54 AM] [INFO] Not using any mapping file.\n[2018/06/22 04:49:54 AM] [INFO] Finding active nics\n[2018/06/22 
04:49:54 AM] [INFO] eth1 is an embedded active nic\n[2018/06/22 04:49:54 AM] [INFO] eth0 is an embedded active nic\n[2018/06/22 04:49:54 AM] [INFO] eth2 is an embedded active nic\n[2018/06/22 04:49:54 AM] [INFO] lo is not an active nic\n[2018/06/22 04:49:54 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\n[2018/06/22 04:49:54 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\n[2018/06/22 04:49:54 AM] [INFO] nic3 mapped to: eth2\n[2018/06/22 04:49:54 AM] [INFO] nic2 mapped to: eth1\n[2018/06/22 04:49:54 AM] [INFO] nic1 mapped to: eth0\n[2018/06/22 04:49:54 AM] [INFO] adding interface: eth0\n[2018/06/22 04:49:54 AM] [INFO] adding custom route for interface: eth0\n[2018/06/22 04:49:54 AM] [INFO] adding bridge: br-isolated\n[2018/06/22 04:49:54 AM] [INFO] adding interface: eth1\n[2018/06/22 04:49:54 AM] [INFO] adding vlan: vlan20\n[2018/06/22 04:49:54 AM] [INFO] adding vlan: vlan30\n[2018/06/22 04:49:54 AM] [INFO] adding vlan: vlan50\n[2018/06/22 04:49:54 AM] [INFO] adding interface: eth2\n[2018/06/22 04:49:54 AM] [INFO] applying network configs...\n[2018/06/22 04:49:54 AM] [INFO] running ifdown on interface: vlan20\n[2018/06/22 04:49:54 AM] [INFO] running ifdown on interface: vlan30\n[2018/06/22 04:49:54 AM] [INFO] running ifdown on interface: vlan50\n[2018/06/22 04:49:54 AM] [INFO] running ifdown on interface: eth2\n[2018/06/22 04:49:55 AM] [INFO] running ifdown on interface: eth1\n[2018/06/22 04:49:55 AM] [INFO] running ifdown on interface: eth0\n[2018/06/22 04:49:55 AM] [INFO] running ifdown on interface: vlan20\n[2018/06/22 04:49:55 AM] [INFO] running ifdown on interface: vlan30\n[2018/06/22 04:49:55 AM] [INFO] running ifdown on interface: vlan50\n[2018/06/22 04:49:55 AM] [INFO] running ifdown on bridge: br-isolated\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\n[2018/06/22 04:49:55 
AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\n[2018/06/22 04:49:55 AM] [INFO] running ifup on bridge: br-isolated\n[2018/06/22 04:49:55 AM] [INFO] running ifup on interface: eth2\n[2018/06/22 04:49:55 AM] [INFO] running ifup on interface: eth1\n[2018/06/22 04:49:55 AM] [INFO] running ifup on interface: 
eth0\n[2018/06/22 04:50:00 AM] [INFO] running ifup on interface: vlan20\n[2018/06/22 04:50:04 AM] [INFO] running ifup on interface: vlan30\n[2018/06/22 04:50:08 AM] [INFO] running ifup on interface: vlan50\n[2018/06/22 04:50:12 AM] [INFO] running ifup on interface: vlan20\n[2018/06/22 04:50:12 AM] [INFO] running ifup on interface: vlan30\n[2018/06/22 04:50:13 AM] [INFO] running ifup on interface: vlan50\n+ RETVAL=2\n+ set -e\n+ [[ 2 == 2 ]]\n+ ping_metadata_ip\n++ get_metadata_ip\n++ local METADATA_IP\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=192.168.24.3\n++ '[' -n 192.168.24.3 ']'\n++ break\n++ echo 192.168.24.3\n+ local METADATA_IP=192.168.24.3\n+ '[' -n 192.168.24.3 ']'\n+ is_local_ip 192.168.24.3\n+ local IP_TO_CHECK=192.168.24.3\n+ ip -o a\n+ grep 'inet6\\? 
192.168.24.3/'\n+ return 1\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\n+ _ping=ping\n+ [[ 192.168.24.3 =~ : ]]\n+ local COUNT=0\n+ ping -c 1 192.168.24.3\n+ echo SUCCESS\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\n+ configure_safe_defaults\n+ [[ 0 == 0 ]]\n+ return 0\n\n[2018-06-22 04:50:13,484] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/809d11ee-8772-4ffa-afde-0dbb5b84abb6\n\n[2018-06-22 04:50:13,488] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-22 04:50:13,489] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/809d11ee-8772-4ffa-afde-0dbb5b84abb6.json < /var/lib/heat-config/deployed/809d11ee-8772-4ffa-afde-0dbb5b84abb6.notify.json\n[2018-06-22 04:50:13,889] (heat-config) [INFO] \n[2018-06-22 04:50:13,889] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-22 04:49:54,248] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/809d11ee-8772-4ffa-afde-0dbb5b84abb6.json", "[2018-06-22 04:50:13,488] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.3...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.16/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": 
[{\\\"ip_netmask\\\": \\\"172.17.1.14/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.13/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.15/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.16/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.14/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.13/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.15/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/06/22 
04:49:54 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/06/22 04:49:54 AM] [INFO] Ifcfg net config provider created.\\n[2018/06/22 04:49:54 AM] [INFO] Not using any mapping file.\\n[2018/06/22 04:49:54 AM] [INFO] Finding active nics\\n[2018/06/22 04:49:54 AM] [INFO] eth1 is an embedded active nic\\n[2018/06/22 04:49:54 AM] [INFO] eth0 is an embedded active nic\\n[2018/06/22 04:49:54 AM] [INFO] eth2 is an embedded active nic\\n[2018/06/22 04:49:54 AM] [INFO] lo is not an active nic\\n[2018/06/22 04:49:54 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/06/22 04:49:54 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/06/22 04:49:54 AM] [INFO] nic3 mapped to: eth2\\n[2018/06/22 04:49:54 AM] [INFO] nic2 mapped to: eth1\\n[2018/06/22 04:49:54 AM] [INFO] nic1 mapped to: eth0\\n[2018/06/22 04:49:54 AM] [INFO] adding interface: eth0\\n[2018/06/22 04:49:54 AM] [INFO] adding custom route for interface: eth0\\n[2018/06/22 04:49:54 AM] [INFO] adding bridge: br-isolated\\n[2018/06/22 04:49:54 AM] [INFO] adding interface: eth1\\n[2018/06/22 04:49:54 AM] [INFO] adding vlan: vlan20\\n[2018/06/22 04:49:54 AM] [INFO] adding vlan: vlan30\\n[2018/06/22 04:49:54 AM] [INFO] adding vlan: vlan50\\n[2018/06/22 04:49:54 AM] [INFO] adding interface: eth2\\n[2018/06/22 04:49:54 AM] [INFO] applying network configs...\\n[2018/06/22 04:49:54 AM] [INFO] running ifdown on interface: vlan20\\n[2018/06/22 04:49:54 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/22 04:49:54 AM] [INFO] running ifdown on interface: vlan50\\n[2018/06/22 04:49:54 AM] [INFO] running ifdown on interface: eth2\\n[2018/06/22 04:49:55 AM] [INFO] running ifdown on interface: eth1\\n[2018/06/22 04:49:55 AM] [INFO] running ifdown on interface: eth0\\n[2018/06/22 04:49:55 AM] [INFO] running ifdown on interface: vlan20\\n[2018/06/22 04:49:55 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/22 04:49:55 AM] [INFO] running ifdown on 
interface: vlan50\\n[2018/06/22 04:49:55 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/06/22 04:49:55 AM] [INFO] Writing 
config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/06/22 04:49:55 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/06/22 04:49:55 AM] [INFO] running ifup on interface: eth2\\n[2018/06/22 04:49:55 AM] [INFO] running ifup on interface: eth1\\n[2018/06/22 04:49:55 AM] [INFO] running ifup on interface: eth0\\n[2018/06/22 04:50:00 AM] [INFO] running ifup on interface: vlan20\\n[2018/06/22 04:50:04 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/22 04:50:08 AM] [INFO] running ifup on interface: vlan50\\n[2018/06/22 04:50:12 AM] [INFO] running ifup on interface: vlan20\\n[2018/06/22 04:50:12 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/22 04:50:13 AM] [INFO] running ifup on interface: vlan50\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.3\\n++ '[' -n 192.168.24.3 ']'\\n++ break\\n++ echo 192.168.24.3\\n+ local METADATA_IP=192.168.24.3\\n+ '[' -n 192.168.24.3 
']'\\n+ is_local_ip 192.168.24.3\\n+ local IP_TO_CHECK=192.168.24.3\\n+ ip -o a\\n+ grep 'inet6\\\\? 192.168.24.3/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\\n+ _ping=ping\\n+ [[ 192.168.24.3 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.3\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}", "[2018-06-22 04:50:13,488] (heat-config) [DEBUG] [2018-06-22 04:49:54,271] (heat-config) [INFO] interface_name=nic1", "[2018-06-22 04:49:54,271] (heat-config) [INFO] bridge_name=br-ex", "[2018-06-22 04:49:54,271] (heat-config) [INFO] deploy_server_id=873c916c-1df4-487f-9ebb-a2c81aa5dfd9", "[2018-06-22 04:49:54,271] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-22 04:49:54,271] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-pkczk7wkj4ny-0-5yammmgid6qr-NetworkDeployment-mawtpgoyduf7-TripleOSoftwareDeployment-vaegjpwaztpg/1a121906-f58e-4e19-8cc5-4d4d4f95ca52", "[2018-06-22 04:49:54,271] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-22 04:49:54,271] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-22 04:49:54,271] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/809d11ee-8772-4ffa-afde-0dbb5b84abb6", "[2018-06-22 04:50:13,484] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.3...SUCCESS", "", "[2018-06-22 04:50:13,484] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.16/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": 
\"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.14/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.13/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.15/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}' ']'", "+ '[' -z '' ']'", "+ trap configure_safe_defaults EXIT", "+ mkdir -p /etc/os-net-config", "+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.16/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.14/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.13/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.15/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}'", "++ type -t network_config_hook", "+ '[' '' = function ']'", "+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json", "+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json", "+ set +e", "+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes", "[2018/06/22 04:49:54 AM] [INFO] Using config file at: /etc/os-net-config/config.json", "[2018/06/22 04:49:54 AM] [INFO] Ifcfg net 
config provider created.", "[2018/06/22 04:49:54 AM] [INFO] Not using any mapping file.", "[2018/06/22 04:49:54 AM] [INFO] Finding active nics", "[2018/06/22 04:49:54 AM] [INFO] eth1 is an embedded active nic", "[2018/06/22 04:49:54 AM] [INFO] eth0 is an embedded active nic", "[2018/06/22 04:49:54 AM] [INFO] eth2 is an embedded active nic", "[2018/06/22 04:49:54 AM] [INFO] lo is not an active nic", "[2018/06/22 04:49:54 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)", "[2018/06/22 04:49:54 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']", "[2018/06/22 04:49:54 AM] [INFO] nic3 mapped to: eth2", "[2018/06/22 04:49:54 AM] [INFO] nic2 mapped to: eth1", "[2018/06/22 04:49:54 AM] [INFO] nic1 mapped to: eth0", "[2018/06/22 04:49:54 AM] [INFO] adding interface: eth0", "[2018/06/22 04:49:54 AM] [INFO] adding custom route for interface: eth0", "[2018/06/22 04:49:54 AM] [INFO] adding bridge: br-isolated", "[2018/06/22 04:49:54 AM] [INFO] adding interface: eth1", "[2018/06/22 04:49:54 AM] [INFO] adding vlan: vlan20", "[2018/06/22 04:49:54 AM] [INFO] adding vlan: vlan30", "[2018/06/22 04:49:54 AM] [INFO] adding vlan: vlan50", "[2018/06/22 04:49:54 AM] [INFO] adding interface: eth2", "[2018/06/22 04:49:54 AM] [INFO] applying network configs...", "[2018/06/22 04:49:54 AM] [INFO] running ifdown on interface: vlan20", "[2018/06/22 04:49:54 AM] [INFO] running ifdown on interface: vlan30", "[2018/06/22 04:49:54 AM] [INFO] running ifdown on interface: vlan50", "[2018/06/22 04:49:54 AM] [INFO] running ifdown on interface: eth2", "[2018/06/22 04:49:55 AM] [INFO] running ifdown on interface: eth1", "[2018/06/22 04:49:55 AM] [INFO] running ifdown on interface: eth0", "[2018/06/22 04:49:55 AM] [INFO] running ifdown on interface: vlan20", "[2018/06/22 04:49:55 AM] [INFO] running ifdown on interface: vlan30", "[2018/06/22 04:49:55 AM] [INFO] running ifdown on interface: vlan50", "[2018/06/22 04:49:55 AM] [INFO] running ifdown on bridge: 
br-isolated", "[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated", "[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50", "[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated", "[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20", "[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20", "[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30", "[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50", "[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20", "[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0", "[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1", "[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2", "[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50", "[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated", "[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2", "[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1", "[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0", "[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30", "[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2", "[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30", "[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0", "[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1", 
"[2018/06/22 04:49:55 AM] [INFO] running ifup on bridge: br-isolated", "[2018/06/22 04:49:55 AM] [INFO] running ifup on interface: eth2", "[2018/06/22 04:49:55 AM] [INFO] running ifup on interface: eth1", "[2018/06/22 04:49:55 AM] [INFO] running ifup on interface: eth0", "[2018/06/22 04:50:00 AM] [INFO] running ifup on interface: vlan20", "[2018/06/22 04:50:04 AM] [INFO] running ifup on interface: vlan30", "[2018/06/22 04:50:08 AM] [INFO] running ifup on interface: vlan50", "[2018/06/22 04:50:12 AM] [INFO] running ifup on interface: vlan20", "[2018/06/22 04:50:12 AM] [INFO] running ifup on interface: vlan30", "[2018/06/22 04:50:13 AM] [INFO] running ifup on interface: vlan50", "+ RETVAL=2", "+ set -e", "+ [[ 2 == 2 ]]", "+ ping_metadata_ip", "++ get_metadata_ip", "++ local METADATA_IP", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=", "++ '[' -n '' ']'", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=", "++ '[' -n '' ']'", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=192.168.24.3", "++ '[' -n 192.168.24.3 ']'", "++ break", "++ echo 192.168.24.3", "+ local METADATA_IP=192.168.24.3", "+ '[' -n 192.168.24.3 ']'", "+ is_local_ip 192.168.24.3", "+ local 
IP_TO_CHECK=192.168.24.3", "+ ip -o a", "+ grep 'inet6\\? 192.168.24.3/'", "+ return 1", "+ echo -n 'Trying to ping metadata IP 192.168.24.3...'", "+ _ping=ping", "+ [[ 192.168.24.3 =~ : ]]", "+ local COUNT=0", "+ ping -c 1 192.168.24.3", "+ echo SUCCESS", "+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'", "+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules", "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'", "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'", "+ configure_safe_defaults", "+ [[ 0 == 0 ]]", "+ return 0", "", "[2018-06-22 04:50:13,484] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/809d11ee-8772-4ffa-afde-0dbb5b84abb6", "", "[2018-06-22 04:50:13,488] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-22 04:50:13,489] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/809d11ee-8772-4ffa-afde-0dbb5b84abb6.json < /var/lib/heat-config/deployed/809d11ee-8772-4ffa-afde-0dbb5b84abb6.notify.json", "[2018-06-22 04:50:13,889] (heat-config) [INFO] ", "[2018-06-22 04:50:13,889] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-22 04:50:13,928 p=11115 u=mistral | TASK [Output for NetworkDeployment] ******************************************** >2018-06-22 04:50:13,983 p=11115 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-22 04:49:54,248] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/809d11ee-8772-4ffa-afde-0dbb5b84abb6.json", > "[2018-06-22 04:50:13,488] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.3...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.16/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": 
[{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.14/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.13/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.15/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.16/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.14/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.13/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.15/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, 
{\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/06/22 04:49:54 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/06/22 04:49:54 AM] [INFO] Ifcfg net config provider created.\\n[2018/06/22 04:49:54 AM] [INFO] Not using any mapping file.\\n[2018/06/22 04:49:54 AM] [INFO] Finding active nics\\n[2018/06/22 04:49:54 AM] [INFO] eth1 is an embedded active nic\\n[2018/06/22 04:49:54 AM] [INFO] eth0 is an embedded active nic\\n[2018/06/22 04:49:54 AM] [INFO] eth2 is an embedded active nic\\n[2018/06/22 04:49:54 AM] [INFO] lo is not an active nic\\n[2018/06/22 04:49:54 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/06/22 04:49:54 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/06/22 04:49:54 AM] [INFO] nic3 mapped to: eth2\\n[2018/06/22 04:49:54 AM] [INFO] nic2 mapped to: eth1\\n[2018/06/22 04:49:54 AM] [INFO] nic1 mapped to: eth0\\n[2018/06/22 04:49:54 AM] [INFO] adding interface: eth0\\n[2018/06/22 04:49:54 AM] [INFO] adding custom route for interface: eth0\\n[2018/06/22 04:49:54 AM] [INFO] adding bridge: br-isolated\\n[2018/06/22 04:49:54 AM] [INFO] adding interface: eth1\\n[2018/06/22 04:49:54 AM] [INFO] adding vlan: vlan20\\n[2018/06/22 04:49:54 AM] [INFO] adding vlan: vlan30\\n[2018/06/22 04:49:54 AM] [INFO] adding vlan: vlan50\\n[2018/06/22 04:49:54 AM] [INFO] adding interface: eth2\\n[2018/06/22 04:49:54 AM] [INFO] applying network configs...\\n[2018/06/22 04:49:54 AM] [INFO] running ifdown on interface: vlan20\\n[2018/06/22 04:49:54 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/22 04:49:54 AM] [INFO] running ifdown on interface: vlan50\\n[2018/06/22 
04:49:54 AM] [INFO] running ifdown on interface: eth2\\n[2018/06/22 04:49:55 AM] [INFO] running ifdown on interface: eth1\\n[2018/06/22 04:49:55 AM] [INFO] running ifdown on interface: eth0\\n[2018/06/22 04:49:55 AM] [INFO] running ifdown on interface: vlan20\\n[2018/06/22 04:49:55 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/22 04:49:55 AM] [INFO] running ifdown on interface: vlan50\\n[2018/06/22 04:49:55 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/06/22 04:49:55 AM] 
[INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/06/22 04:49:55 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/06/22 04:49:55 AM] [INFO] running ifup on interface: eth2\\n[2018/06/22 04:49:55 AM] [INFO] running ifup on interface: eth1\\n[2018/06/22 04:49:55 AM] [INFO] running ifup on interface: eth0\\n[2018/06/22 04:50:00 AM] [INFO] running ifup on interface: vlan20\\n[2018/06/22 04:50:04 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/22 04:50:08 AM] [INFO] running ifup on interface: vlan50\\n[2018/06/22 04:50:12 AM] [INFO] running ifup on interface: vlan20\\n[2018/06/22 04:50:12 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/22 04:50:13 AM] [INFO] running ifup on interface: vlan50\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url 
os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.3\\n++ '[' -n 192.168.24.3 ']'\\n++ break\\n++ echo 192.168.24.3\\n+ local METADATA_IP=192.168.24.3\\n+ '[' -n 192.168.24.3 ']'\\n+ is_local_ip 192.168.24.3\\n+ local IP_TO_CHECK=192.168.24.3\\n+ ip -o a\\n+ grep 'inet6\\\\? 192.168.24.3/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\\n+ _ping=ping\\n+ [[ 192.168.24.3 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.3\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}", > "[2018-06-22 04:50:13,488] (heat-config) [DEBUG] [2018-06-22 04:49:54,271] (heat-config) [INFO] interface_name=nic1", > "[2018-06-22 04:49:54,271] (heat-config) [INFO] bridge_name=br-ex", > "[2018-06-22 04:49:54,271] (heat-config) [INFO] deploy_server_id=873c916c-1df4-487f-9ebb-a2c81aa5dfd9", > "[2018-06-22 04:49:54,271] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-22 04:49:54,271] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-pkczk7wkj4ny-0-5yammmgid6qr-NetworkDeployment-mawtpgoyduf7-TripleOSoftwareDeployment-vaegjpwaztpg/1a121906-f58e-4e19-8cc5-4d4d4f95ca52", > "[2018-06-22 04:49:54,271] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-22 04:49:54,271] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-22 04:49:54,271] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/809d11ee-8772-4ffa-afde-0dbb5b84abb6", > "[2018-06-22 
04:50:13,484] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.3...SUCCESS", > "", > "[2018-06-22 04:50:13,484] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.16/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.14/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.13/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.15/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}' ']'", > "+ '[' -z '' ']'", > "+ trap configure_safe_defaults EXIT", > "+ mkdir -p /etc/os-net-config", > "+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.16/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.14/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.13/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.15/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}'", > "++ 
type -t network_config_hook", > "+ '[' '' = function ']'", > "+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json", > "+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json", > "+ set +e", > "+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes", > "[2018/06/22 04:49:54 AM] [INFO] Using config file at: /etc/os-net-config/config.json", > "[2018/06/22 04:49:54 AM] [INFO] Ifcfg net config provider created.", > "[2018/06/22 04:49:54 AM] [INFO] Not using any mapping file.", > "[2018/06/22 04:49:54 AM] [INFO] Finding active nics", > "[2018/06/22 04:49:54 AM] [INFO] eth1 is an embedded active nic", > "[2018/06/22 04:49:54 AM] [INFO] eth0 is an embedded active nic", > "[2018/06/22 04:49:54 AM] [INFO] eth2 is an embedded active nic", > "[2018/06/22 04:49:54 AM] [INFO] lo is not an active nic", > "[2018/06/22 04:49:54 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)", > "[2018/06/22 04:49:54 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']", > "[2018/06/22 04:49:54 AM] [INFO] nic3 mapped to: eth2", > "[2018/06/22 04:49:54 AM] [INFO] nic2 mapped to: eth1", > "[2018/06/22 04:49:54 AM] [INFO] nic1 mapped to: eth0", > "[2018/06/22 04:49:54 AM] [INFO] adding interface: eth0", > "[2018/06/22 04:49:54 AM] [INFO] adding custom route for interface: eth0", > "[2018/06/22 04:49:54 AM] [INFO] adding bridge: br-isolated", > "[2018/06/22 04:49:54 AM] [INFO] adding interface: eth1", > "[2018/06/22 04:49:54 AM] [INFO] adding vlan: vlan20", > "[2018/06/22 04:49:54 AM] [INFO] adding vlan: vlan30", > "[2018/06/22 04:49:54 AM] [INFO] adding vlan: vlan50", > "[2018/06/22 04:49:54 AM] [INFO] adding interface: eth2", > "[2018/06/22 04:49:54 AM] [INFO] applying network configs...", > "[2018/06/22 04:49:54 AM] [INFO] running ifdown on interface: vlan20", > "[2018/06/22 04:49:54 AM] [INFO] running ifdown on interface: vlan30", > "[2018/06/22 04:49:54 AM] [INFO] running ifdown on interface: vlan50", > "[2018/06/22 
04:49:54 AM] [INFO] running ifdown on interface: eth2", > "[2018/06/22 04:49:55 AM] [INFO] running ifdown on interface: eth1", > "[2018/06/22 04:49:55 AM] [INFO] running ifdown on interface: eth0", > "[2018/06/22 04:49:55 AM] [INFO] running ifdown on interface: vlan20", > "[2018/06/22 04:49:55 AM] [INFO] running ifdown on interface: vlan30", > "[2018/06/22 04:49:55 AM] [INFO] running ifdown on interface: vlan50", > "[2018/06/22 04:49:55 AM] [INFO] running ifdown on bridge: br-isolated", > "[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated", > "[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50", > "[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated", > "[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20", > "[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20", > "[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30", > "[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50", > "[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20", > "[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0", > "[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1", > "[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2", > "[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50", > "[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated", > "[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2", > "[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1", > "[2018/06/22 04:49:55 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/route6-eth0", > "[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30", > "[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2", > "[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30", > "[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0", > "[2018/06/22 04:49:55 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1", > "[2018/06/22 04:49:55 AM] [INFO] running ifup on bridge: br-isolated", > "[2018/06/22 04:49:55 AM] [INFO] running ifup on interface: eth2", > "[2018/06/22 04:49:55 AM] [INFO] running ifup on interface: eth1", > "[2018/06/22 04:49:55 AM] [INFO] running ifup on interface: eth0", > "[2018/06/22 04:50:00 AM] [INFO] running ifup on interface: vlan20", > "[2018/06/22 04:50:04 AM] [INFO] running ifup on interface: vlan30", > "[2018/06/22 04:50:08 AM] [INFO] running ifup on interface: vlan50", > "[2018/06/22 04:50:12 AM] [INFO] running ifup on interface: vlan20", > "[2018/06/22 04:50:12 AM] [INFO] running ifup on interface: vlan30", > "[2018/06/22 04:50:13 AM] [INFO] running ifup on interface: vlan50", > "+ RETVAL=2", > "+ set -e", > "+ [[ 2 == 2 ]]", > "+ ping_metadata_ip", > "++ get_metadata_ip", > "++ local METADATA_IP", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=", > "++ '[' -n '' ']'", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw", > "+++ sed -e 
's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=", > "++ '[' -n '' ']'", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=192.168.24.3", > "++ '[' -n 192.168.24.3 ']'", > "++ break", > "++ echo 192.168.24.3", > "+ local METADATA_IP=192.168.24.3", > "+ '[' -n 192.168.24.3 ']'", > "+ is_local_ip 192.168.24.3", > "+ local IP_TO_CHECK=192.168.24.3", > "+ ip -o a", > "+ grep 'inet6\\? 192.168.24.3/'", > "+ return 1", > "+ echo -n 'Trying to ping metadata IP 192.168.24.3...'", > "+ _ping=ping", > "+ [[ 192.168.24.3 =~ : ]]", > "+ local COUNT=0", > "+ ping -c 1 192.168.24.3", > "+ echo SUCCESS", > "+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'", > "+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules", > "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'", > "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'", > "+ configure_safe_defaults", > "+ [[ 0 == 0 ]]", > "+ return 0", > "", > "[2018-06-22 04:50:13,484] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/809d11ee-8772-4ffa-afde-0dbb5b84abb6", > "", > "[2018-06-22 04:50:13,488] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-22 04:50:13,489] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/809d11ee-8772-4ffa-afde-0dbb5b84abb6.json < /var/lib/heat-config/deployed/809d11ee-8772-4ffa-afde-0dbb5b84abb6.notify.json", > "[2018-06-22 04:50:13,889] (heat-config) [INFO] ", > "[2018-06-22 04:50:13,889] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-22 04:50:14,005 p=11115 u=mistral | TASK [Check-mode for Run deployment 
NetworkDeployment] ************************* >2018-06-22 04:50:14,020 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:50:14,039 p=11115 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-22 04:50:14,142 p=11115 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "327be9d4-279d-4d43-97bb-ce7470e779ec"}, "changed": false} >2018-06-22 04:50:14,161 p=11115 u=mistral | TASK [Render deployment file for NovaComputeUpgradeInitDeployment] ************* >2018-06-22 04:50:14,832 p=11115 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "5cd3290f9ce6099afb6c2d2ce0a90a94f5c7429f", "dest": "/var/lib/heat-config/tripleo-config-download/NovaComputeUpgradeInitDeployment-327be9d4-279d-4d43-97bb-ce7470e779ec", "gid": 0, "group": "root", "md5sum": "1e7c4576e2ff4e4a474bb70bd877385c", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1182, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657414.26-137446592239984/source", "state": "file", "uid": 0} >2018-06-22 04:50:14,852 p=11115 u=mistral | TASK [Check if deployed file exists for NovaComputeUpgradeInitDeployment] ****** >2018-06-22 04:50:15,231 p=11115 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 04:50:15,250 p=11115 u=mistral | TASK [Check previous deployment rc for NovaComputeUpgradeInitDeployment] ******* >2018-06-22 04:50:15,266 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:50:15,284 p=11115 u=mistral | TASK [Remove deployed file for NovaComputeUpgradeInitDeployment when previous deployment failed] *** >2018-06-22 04:50:15,303 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:50:15,357 p=11115 u=mistral | TASK [Force remove 
deployed file for NovaComputeUpgradeInitDeployment] ********* >2018-06-22 04:50:15,374 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:50:15,394 p=11115 u=mistral | TASK [Run deployment NovaComputeUpgradeInitDeployment] ************************* >2018-06-22 04:50:16,207 p=11115 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/327be9d4-279d-4d43-97bb-ce7470e779ec.notify.json)", "delta": "0:00:00.479904", "end": "2018-06-22 04:50:16.206617", "rc": 0, "start": "2018-06-22 04:50:15.726713", "stderr": "[2018-06-22 04:50:15,751] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/327be9d4-279d-4d43-97bb-ce7470e779ec.json\n[2018-06-22 04:50:15,781] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-22 04:50:15,781] (heat-config) [DEBUG] [2018-06-22 04:50:15,772] (heat-config) [INFO] deploy_server_id=873c916c-1df4-487f-9ebb-a2c81aa5dfd9\n[2018-06-22 04:50:15,773] (heat-config) [INFO] deploy_action=CREATE\n[2018-06-22 04:50:15,773] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-pkczk7wkj4ny-0-5yammmgid6qr-NovaComputeUpgradeInitDeployment-65bnnm3lxqrb/acee393a-4198-458e-998a-8d9e37e34112\n[2018-06-22 04:50:15,773] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-22 04:50:15,773] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-22 04:50:15,773] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/327be9d4-279d-4d43-97bb-ce7470e779ec\n[2018-06-22 04:50:15,777] (heat-config) [INFO] \n[2018-06-22 04:50:15,778] (heat-config) [DEBUG] \n[2018-06-22 04:50:15,778] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/327be9d4-279d-4d43-97bb-ce7470e779ec\n\n[2018-06-22 04:50:15,781] 
(heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-22 04:50:15,781] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/327be9d4-279d-4d43-97bb-ce7470e779ec.json < /var/lib/heat-config/deployed/327be9d4-279d-4d43-97bb-ce7470e779ec.notify.json\n[2018-06-22 04:50:16,200] (heat-config) [INFO] \n[2018-06-22 04:50:16,200] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-22 04:50:15,751] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/327be9d4-279d-4d43-97bb-ce7470e779ec.json", "[2018-06-22 04:50:15,781] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-22 04:50:15,781] (heat-config) [DEBUG] [2018-06-22 04:50:15,772] (heat-config) [INFO] deploy_server_id=873c916c-1df4-487f-9ebb-a2c81aa5dfd9", "[2018-06-22 04:50:15,773] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-22 04:50:15,773] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-pkczk7wkj4ny-0-5yammmgid6qr-NovaComputeUpgradeInitDeployment-65bnnm3lxqrb/acee393a-4198-458e-998a-8d9e37e34112", "[2018-06-22 04:50:15,773] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-22 04:50:15,773] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-22 04:50:15,773] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/327be9d4-279d-4d43-97bb-ce7470e779ec", "[2018-06-22 04:50:15,777] (heat-config) [INFO] ", "[2018-06-22 04:50:15,778] (heat-config) [DEBUG] ", "[2018-06-22 04:50:15,778] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/327be9d4-279d-4d43-97bb-ce7470e779ec", "", "[2018-06-22 04:50:15,781] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-22 04:50:15,781] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/327be9d4-279d-4d43-97bb-ce7470e779ec.json < 
/var/lib/heat-config/deployed/327be9d4-279d-4d43-97bb-ce7470e779ec.notify.json", "[2018-06-22 04:50:16,200] (heat-config) [INFO] ", "[2018-06-22 04:50:16,200] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-22 04:50:16,226 p=11115 u=mistral | TASK [Output for NovaComputeUpgradeInitDeployment] ***************************** >2018-06-22 04:50:16,277 p=11115 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-22 04:50:15,751] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/327be9d4-279d-4d43-97bb-ce7470e779ec.json", > "[2018-06-22 04:50:15,781] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-22 04:50:15,781] (heat-config) [DEBUG] [2018-06-22 04:50:15,772] (heat-config) [INFO] deploy_server_id=873c916c-1df4-487f-9ebb-a2c81aa5dfd9", > "[2018-06-22 04:50:15,773] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-22 04:50:15,773] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-pkczk7wkj4ny-0-5yammmgid6qr-NovaComputeUpgradeInitDeployment-65bnnm3lxqrb/acee393a-4198-458e-998a-8d9e37e34112", > "[2018-06-22 04:50:15,773] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-22 04:50:15,773] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-22 04:50:15,773] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/327be9d4-279d-4d43-97bb-ce7470e779ec", > "[2018-06-22 04:50:15,777] (heat-config) [INFO] ", > "[2018-06-22 04:50:15,778] (heat-config) [DEBUG] ", > "[2018-06-22 04:50:15,778] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/327be9d4-279d-4d43-97bb-ce7470e779ec", > "", > "[2018-06-22 04:50:15,781] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-22 04:50:15,781] (heat-config) [DEBUG] Running heat-config-notify 
/var/lib/heat-config/deployed/327be9d4-279d-4d43-97bb-ce7470e779ec.json < /var/lib/heat-config/deployed/327be9d4-279d-4d43-97bb-ce7470e779ec.notify.json", > "[2018-06-22 04:50:16,200] (heat-config) [INFO] ", > "[2018-06-22 04:50:16,200] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-22 04:50:16,296 p=11115 u=mistral | TASK [Check-mode for Run deployment NovaComputeUpgradeInitDeployment] ********** >2018-06-22 04:50:16,310 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:50:16,329 p=11115 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-22 04:50:16,461 p=11115 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "c1263c51-85e0-49cd-b628-2ca0ccf98415"}, "changed": false} >2018-06-22 04:50:16,479 p=11115 u=mistral | TASK [Render deployment file for NovaComputeDeployment] ************************ >2018-06-22 04:50:17,175 p=11115 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "fb2cabfc23430304e1b2b1f48445c5c3d4bf4948", "dest": "/var/lib/heat-config/tripleo-config-download/NovaComputeDeployment-c1263c51-85e0-49cd-b628-2ca0ccf98415", "gid": 0, "group": "root", "md5sum": "dd33e7120114dc8024f48ad18edb6c4e", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 21871, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657416.61-222777733634719/source", "state": "file", "uid": 0} >2018-06-22 04:50:17,193 p=11115 u=mistral | TASK [Check if deployed file exists for NovaComputeDeployment] ***************** >2018-06-22 04:50:17,522 p=11115 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 04:50:17,543 p=11115 u=mistral | TASK [Check previous deployment rc for NovaComputeDeployment] ****************** >2018-06-22 04:50:17,560 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": 
"Conditional result was False"} >2018-06-22 04:50:17,579 p=11115 u=mistral | TASK [Remove deployed file for NovaComputeDeployment when previous deployment failed] *** >2018-06-22 04:50:17,595 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:50:17,615 p=11115 u=mistral | TASK [Force remove deployed file for NovaComputeDeployment] ******************** >2018-06-22 04:50:17,630 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:50:17,649 p=11115 u=mistral | TASK [Run deployment NovaComputeDeployment] ************************************ >2018-06-22 04:50:18,543 p=11115 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/c1263c51-85e0-49cd-b628-2ca0ccf98415.notify.json)", "delta": "0:00:00.560392", "end": "2018-06-22 04:50:18.545410", "rc": 0, "start": "2018-06-22 04:50:17.985018", "stderr": "[2018-06-22 04:50:18,012] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/c1263c51-85e0-49cd-b628-2ca0ccf98415.json\n[2018-06-22 04:50:18,133] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-22 04:50:18,133] (heat-config) [DEBUG] \n[2018-06-22 04:50:18,133] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-06-22 04:50:18,133] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/c1263c51-85e0-49cd-b628-2ca0ccf98415.json < /var/lib/heat-config/deployed/c1263c51-85e0-49cd-b628-2ca0ccf98415.notify.json\n[2018-06-22 04:50:18,538] (heat-config) [INFO] \n[2018-06-22 04:50:18,539] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-22 04:50:18,012] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < 
/var/lib/heat-config/deployed/c1263c51-85e0-49cd-b628-2ca0ccf98415.json", "[2018-06-22 04:50:18,133] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-22 04:50:18,133] (heat-config) [DEBUG] ", "[2018-06-22 04:50:18,133] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", "[2018-06-22 04:50:18,133] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/c1263c51-85e0-49cd-b628-2ca0ccf98415.json < /var/lib/heat-config/deployed/c1263c51-85e0-49cd-b628-2ca0ccf98415.notify.json", "[2018-06-22 04:50:18,538] (heat-config) [INFO] ", "[2018-06-22 04:50:18,539] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-22 04:50:18,562 p=11115 u=mistral | TASK [Output for NovaComputeDeployment] **************************************** >2018-06-22 04:50:18,610 p=11115 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-22 04:50:18,012] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/c1263c51-85e0-49cd-b628-2ca0ccf98415.json", > "[2018-06-22 04:50:18,133] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-22 04:50:18,133] (heat-config) [DEBUG] ", > "[2018-06-22 04:50:18,133] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", > "[2018-06-22 04:50:18,133] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/c1263c51-85e0-49cd-b628-2ca0ccf98415.json < /var/lib/heat-config/deployed/c1263c51-85e0-49cd-b628-2ca0ccf98415.notify.json", > "[2018-06-22 04:50:18,538] (heat-config) [INFO] ", > "[2018-06-22 04:50:18,539] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-22 04:50:18,629 p=11115 u=mistral | TASK [Check-mode for Run deployment NovaComputeDeployment] ********************* >2018-06-22 04:50:18,642 p=11115 u=mistral | skipping: 
[compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:50:18,660 p=11115 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-22 04:50:18,712 p=11115 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "5a51ec44-59ad-41e8-95f6-1dab6d2ba1b5"}, "changed": false} >2018-06-22 04:50:18,732 p=11115 u=mistral | TASK [Render deployment file for ComputeHostsDeployment] *********************** >2018-06-22 04:50:19,317 p=11115 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "1b56f4863b969e423baa1b52eb9d9f0c9f5dbbca", "dest": "/var/lib/heat-config/tripleo-config-download/ComputeHostsDeployment-5a51ec44-59ad-41e8-95f6-1dab6d2ba1b5", "gid": 0, "group": "root", "md5sum": "79d91db8db9229e6d88ba57b6d6b4c7e", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4080, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657418.78-2178162037365/source", "state": "file", "uid": 0} >2018-06-22 04:50:19,337 p=11115 u=mistral | TASK [Check if deployed file exists for ComputeHostsDeployment] **************** >2018-06-22 04:50:19,683 p=11115 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 04:50:19,704 p=11115 u=mistral | TASK [Check previous deployment rc for ComputeHostsDeployment] ***************** >2018-06-22 04:50:19,723 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:50:19,743 p=11115 u=mistral | TASK [Remove deployed file for ComputeHostsDeployment when previous deployment failed] *** >2018-06-22 04:50:19,759 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:50:19,779 p=11115 u=mistral | TASK [Force remove deployed file for ComputeHostsDeployment] ******************* >2018-06-22 04:50:19,797 p=11115 u=mistral | skipping: [compute-0] 
=> {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:50:19,816 p=11115 u=mistral | TASK [Run deployment ComputeHostsDeployment] *********************************** >2018-06-22 04:50:20,652 p=11115 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/5a51ec44-59ad-41e8-95f6-1dab6d2ba1b5.notify.json)", "delta": "0:00:00.461226", "end": "2018-06-22 04:50:20.621347", "rc": 0, "start": "2018-06-22 04:50:20.160121", "stderr": "[2018-06-22 04:50:20,186] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/5a51ec44-59ad-41e8-95f6-1dab6d2ba1b5.json\n[2018-06-22 04:50:20,225] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' -z '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 
compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain 
compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 
ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 
ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain 
ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain 
ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain 
ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.7 
overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 
overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 
0}\n[2018-06-22 04:50:20,225] (heat-config) [DEBUG] [2018-06-22 04:50:20,208] (heat-config) [INFO] hosts=192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.11 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.10 controller-0.localdomain controller-0\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.111 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.14 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.16 compute-0.external.localdomain compute-0.external\n192.168.24.16 compute-0.management.localdomain compute-0.management\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.17 ceph-0.localdomain ceph-0\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane\n[2018-06-22 04:50:20,209] (heat-config) [INFO] deploy_server_id=873c916c-1df4-487f-9ebb-a2c81aa5dfd9\n[2018-06-22 04:50:20,209] (heat-config) [INFO] 
deploy_action=CREATE\n[2018-06-22 04:50:20,209] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeHostsDeployment-7cfcd65zi35o-0-hzff2hjzljyo/2a82888c-6723-4188-a0f7-81d0421079b4\n[2018-06-22 04:50:20,209] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-22 04:50:20,209] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-22 04:50:20,209] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/5a51ec44-59ad-41e8-95f6-1dab6d2ba1b5\n[2018-06-22 04:50:20,221] (heat-config) [INFO] \n[2018-06-22 04:50:20,221] (heat-config) [DEBUG] + set -o pipefail\n+ '[' '!' -z '192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.11 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.10 controller-0.localdomain controller-0\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.111 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.14 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.16 compute-0.external.localdomain compute-0.external\n192.168.24.16 compute-0.management.localdomain compute-0.management\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.17 ceph-0.localdomain ceph-0\n172.17.3.17 ceph-0.storage.localdomain 
ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.11 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.10 controller-0.localdomain controller-0\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.111 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.14 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.16 compute-0.external.localdomain compute-0.external\n192.168.24.16 compute-0.management.localdomain compute-0.management\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.17 ceph-0.localdomain ceph-0\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.13 ceph-0.internalapi.localdomain 
ceph-0.internalapi\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.11 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.10 controller-0.localdomain controller-0\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.111 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.14 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.16 compute-0.external.localdomain compute-0.external\n192.168.24.16 compute-0.management.localdomain compute-0.management\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.17 ceph-0.localdomain ceph-0\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\n192.168.24.13 ceph-0.management.localdomain 
ceph-0.management\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.11 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.10 controller-0.localdomain controller-0\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.111 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.14 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.16 compute-0.external.localdomain compute-0.external\n192.168.24.16 compute-0.management.localdomain compute-0.management\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.17 ceph-0.localdomain ceph-0\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\n192.168.24.13 
ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.11 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.10 controller-0.localdomain controller-0\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.111 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.14 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.16 compute-0.external.localdomain compute-0.external\n192.168.24.16 compute-0.management.localdomain compute-0.management\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.17 ceph-0.localdomain ceph-0\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\n+ local 
'entries=192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.11 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.10 controller-0.localdomain controller-0\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.111 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.14 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.16 compute-0.external.localdomain compute-0.external\n192.168.24.16 compute-0.management.localdomain compute-0.management\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.17 ceph-0.localdomain ceph-0\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.11 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.10 controller-0.localdomain controller-0\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.111 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.14 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.16 compute-0.external.localdomain compute-0.external\n192.168.24.16 compute-0.management.localdomain compute-0.management\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.17 ceph-0.localdomain ceph-0\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in 
'/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.11 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.10 controller-0.localdomain controller-0\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.111 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.14 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.16 compute-0.external.localdomain compute-0.external\n192.168.24.16 compute-0.management.localdomain compute-0.management\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.17 ceph-0.localdomain ceph-0\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.16 
overcloud.storagemgmt.localdomain\n172.17.1.11 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.10 controller-0.localdomain controller-0\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.111 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.14 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.16 compute-0.external.localdomain compute-0.external\n192.168.24.16 compute-0.management.localdomain compute-0.management\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.17 ceph-0.localdomain ceph-0\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.11 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.10 controller-0.localdomain controller-0\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.111 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.14 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.16 compute-0.external.localdomain compute-0.external\n192.168.24.16 compute-0.management.localdomain compute-0.management\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.17 ceph-0.localdomain ceph-0\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in 
'/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.11 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.10 controller-0.localdomain controller-0\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.111 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.14 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.16 compute-0.external.localdomain compute-0.external\n192.168.24.16 compute-0.management.localdomain compute-0.management\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.17 ceph-0.localdomain ceph-0\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.16 
overcloud.storagemgmt.localdomain\n172.17.1.11 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.10 controller-0.localdomain controller-0\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.111 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.14 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.16 compute-0.external.localdomain compute-0.external\n192.168.24.16 compute-0.management.localdomain compute-0.management\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.17 ceph-0.localdomain ceph-0\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.11 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.10 controller-0.localdomain controller-0\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.111 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.14 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.16 compute-0.external.localdomain compute-0.external\n192.168.24.16 compute-0.management.localdomain compute-0.management\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.17 ceph-0.localdomain ceph-0\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ write_entries 
/etc/hosts '192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.11 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.10 controller-0.localdomain controller-0\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.111 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.14 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.16 compute-0.external.localdomain compute-0.external\n192.168.24.16 compute-0.management.localdomain compute-0.management\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.17 ceph-0.localdomain ceph-0\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/hosts\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.11 overcloud.internalapi.localdomain\n10.0.0.106 
overcloud.localdomain\n172.17.1.10 controller-0.localdomain controller-0\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.111 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.14 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.16 compute-0.external.localdomain compute-0.external\n192.168.24.16 compute-0.management.localdomain compute-0.management\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.17 ceph-0.localdomain ceph-0\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/hosts ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.11 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.10 controller-0.localdomain controller-0\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.111 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.14 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.16 compute-0.external.localdomain compute-0.external\n192.168.24.16 compute-0.management.localdomain compute-0.management\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.17 ceph-0.localdomain ceph-0\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n\n[2018-06-22 04:50:20,221] (heat-config) [INFO] Completed 
/var/lib/heat-config/heat-config-script/5a51ec44-59ad-41e8-95f6-1dab6d2ba1b5\n\n[2018-06-22 04:50:20,225] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-22 04:50:20,226] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/5a51ec44-59ad-41e8-95f6-1dab6d2ba1b5.json < /var/lib/heat-config/deployed/5a51ec44-59ad-41e8-95f6-1dab6d2ba1b5.notify.json\n[2018-06-22 04:50:20,615] (heat-config) [INFO] \n[2018-06-22 04:50:20,615] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-22 04:50:20,186] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/5a51ec44-59ad-41e8-95f6-1dab6d2ba1b5.json", "[2018-06-22 04:50:20,225] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' -z '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 
compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain 
compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 
ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain 
ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain 
ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.7 
overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 
overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 
0}", "[2018-06-22 04:50:20,225] (heat-config) [DEBUG] [2018-06-22 04:50:20,208] (heat-config) [INFO] hosts=192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.11 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.10 controller-0.localdomain controller-0", "172.17.3.11 controller-0.storage.localdomain controller-0.storage", "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.111 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.14 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.16 compute-0.external.localdomain compute-0.external", "192.168.24.16 compute-0.management.localdomain compute-0.management", "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.17 ceph-0.localdomain ceph-0", "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.13 ceph-0.external.localdomain ceph-0.external", "192.168.24.13 ceph-0.management.localdomain ceph-0.management", "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane", "[2018-06-22 04:50:20,209] (heat-config) [INFO] 
deploy_server_id=873c916c-1df4-487f-9ebb-a2c81aa5dfd9", "[2018-06-22 04:50:20,209] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-22 04:50:20,209] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeHostsDeployment-7cfcd65zi35o-0-hzff2hjzljyo/2a82888c-6723-4188-a0f7-81d0421079b4", "[2018-06-22 04:50:20,209] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-22 04:50:20,209] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-22 04:50:20,209] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/5a51ec44-59ad-41e8-95f6-1dab6d2ba1b5", "[2018-06-22 04:50:20,221] (heat-config) [INFO] ", "[2018-06-22 04:50:20,221] (heat-config) [DEBUG] + set -o pipefail", "+ '[' '!' -z '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.11 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.10 controller-0.localdomain controller-0", "172.17.3.11 controller-0.storage.localdomain controller-0.storage", "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.111 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.14 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.16 compute-0.external.localdomain compute-0.external", "192.168.24.16 compute-0.management.localdomain compute-0.management", 
"192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.17 ceph-0.localdomain ceph-0", "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.13 ceph-0.external.localdomain ceph-0.external", "192.168.24.13 ceph-0.management.localdomain ceph-0.management", "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.11 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.10 controller-0.localdomain controller-0", "172.17.3.11 controller-0.storage.localdomain controller-0.storage", "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.111 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.14 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.16 compute-0.external.localdomain compute-0.external", "192.168.24.16 compute-0.management.localdomain compute-0.management", "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", 
"172.17.3.17 ceph-0.localdomain ceph-0", "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.13 ceph-0.external.localdomain ceph-0.external", "192.168.24.13 ceph-0.management.localdomain ceph-0.management", "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.debian.tmpl", "+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.11 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.10 controller-0.localdomain controller-0", "172.17.3.11 controller-0.storage.localdomain controller-0.storage", "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.111 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.14 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.16 compute-0.external.localdomain compute-0.external", "192.168.24.16 compute-0.management.localdomain compute-0.management", "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.17 ceph-0.localdomain ceph-0", "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.13 ceph-0.external.localdomain ceph-0.external", "192.168.24.13 ceph-0.management.localdomain ceph-0.management", "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.11 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.10 controller-0.localdomain controller-0", "172.17.3.11 controller-0.storage.localdomain controller-0.storage", "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.111 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.14 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.16 compute-0.external.localdomain compute-0.external", "192.168.24.16 compute-0.management.localdomain compute-0.management", "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.17 ceph-0.localdomain ceph-0", "172.17.3.17 ceph-0.storage.localdomain 
ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.13 ceph-0.external.localdomain ceph-0.external", "192.168.24.13 ceph-0.management.localdomain ceph-0.management", "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.11 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.10 controller-0.localdomain controller-0", "172.17.3.11 controller-0.storage.localdomain controller-0.storage", "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.111 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.14 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.16 compute-0.external.localdomain compute-0.external", "192.168.24.16 compute-0.management.localdomain compute-0.management", "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.17 ceph-0.localdomain ceph-0", "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.13 ceph-0.external.localdomain ceph-0.external", "192.168.24.13 ceph-0.management.localdomain ceph-0.management", "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl", "+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.11 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.10 controller-0.localdomain controller-0", "172.17.3.11 controller-0.storage.localdomain controller-0.storage", "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.111 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.14 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.16 compute-0.external.localdomain compute-0.external", "192.168.24.16 compute-0.management.localdomain compute-0.management", "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.17 ceph-0.localdomain ceph-0", "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", 
"192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.13 ceph-0.external.localdomain ceph-0.external", "192.168.24.13 ceph-0.management.localdomain ceph-0.management", "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.11 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.10 controller-0.localdomain controller-0", "172.17.3.11 controller-0.storage.localdomain controller-0.storage", "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.111 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.14 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.16 compute-0.external.localdomain compute-0.external", "192.168.24.16 compute-0.management.localdomain compute-0.management", "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.17 ceph-0.localdomain ceph-0", "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.13 
ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.13 ceph-0.external.localdomain ceph-0.external", "192.168.24.13 ceph-0.management.localdomain ceph-0.management", "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.11 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.10 controller-0.localdomain controller-0", "172.17.3.11 controller-0.storage.localdomain controller-0.storage", "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.111 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.14 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.16 compute-0.external.localdomain compute-0.external", "192.168.24.16 compute-0.management.localdomain compute-0.management", "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.17 ceph-0.localdomain ceph-0", "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", 
"192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.13 ceph-0.external.localdomain ceph-0.external", "192.168.24.13 ceph-0.management.localdomain ceph-0.management", "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.redhat.tmpl", "+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.11 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.10 controller-0.localdomain controller-0", "172.17.3.11 controller-0.storage.localdomain controller-0.storage", "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.111 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.14 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.16 compute-0.external.localdomain compute-0.external", "192.168.24.16 compute-0.management.localdomain compute-0.management", "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.17 ceph-0.localdomain ceph-0", "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.13 ceph-0.external.localdomain ceph-0.external", 
"192.168.24.13 ceph-0.management.localdomain ceph-0.management", "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.11 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.10 controller-0.localdomain controller-0", "172.17.3.11 controller-0.storage.localdomain controller-0.storage", "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.111 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.14 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.16 compute-0.external.localdomain compute-0.external", "192.168.24.16 compute-0.management.localdomain compute-0.management", "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.17 ceph-0.localdomain ceph-0", "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.13 
ceph-0.external.localdomain ceph-0.external", "192.168.24.13 ceph-0.management.localdomain ceph-0.management", "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.11 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.10 controller-0.localdomain controller-0", "172.17.3.11 controller-0.storage.localdomain controller-0.storage", "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.111 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.14 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.16 compute-0.external.localdomain compute-0.external", "192.168.24.16 compute-0.management.localdomain compute-0.management", "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.17 ceph-0.localdomain ceph-0", "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.13 ceph-0.external.localdomain ceph-0.external", 
"192.168.24.13 ceph-0.management.localdomain ceph-0.management", "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.suse.tmpl", "+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.11 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.10 controller-0.localdomain controller-0", "172.17.3.11 controller-0.storage.localdomain controller-0.storage", "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.111 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.14 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.16 compute-0.external.localdomain compute-0.external", "192.168.24.16 compute-0.management.localdomain compute-0.management", "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.17 ceph-0.localdomain ceph-0", "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.13 ceph-0.external.localdomain ceph-0.external", "192.168.24.13 ceph-0.management.localdomain ceph-0.management", "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ 
'[' '!' -f /etc/cloud/templates/hosts.suse.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.11 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.10 controller-0.localdomain controller-0", "172.17.3.11 controller-0.storage.localdomain controller-0.storage", "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.111 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.14 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.16 compute-0.external.localdomain compute-0.external", "192.168.24.16 compute-0.management.localdomain compute-0.management", "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.17 ceph-0.localdomain ceph-0", "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.13 ceph-0.external.localdomain ceph-0.external", "192.168.24.13 ceph-0.management.localdomain ceph-0.management", "192.168.24.13 ceph-0.ctlplane.localdomain 
ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ write_entries /etc/hosts '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.11 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.10 controller-0.localdomain controller-0", "172.17.3.11 controller-0.storage.localdomain controller-0.storage", "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.111 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.14 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.16 compute-0.external.localdomain compute-0.external", "192.168.24.16 compute-0.management.localdomain compute-0.management", "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.17 ceph-0.localdomain ceph-0", "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.13 ceph-0.external.localdomain ceph-0.external", "192.168.24.13 ceph-0.management.localdomain ceph-0.management", "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/hosts", "+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.16 
overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.11 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.10 controller-0.localdomain controller-0", "172.17.3.11 controller-0.storage.localdomain controller-0.storage", "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.111 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.14 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.16 compute-0.external.localdomain compute-0.external", "192.168.24.16 compute-0.management.localdomain compute-0.management", "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.17 ceph-0.localdomain ceph-0", "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.13 ceph-0.external.localdomain ceph-0.external", "192.168.24.13 ceph-0.management.localdomain ceph-0.management", "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' 
-f /etc/hosts ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.11 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.10 controller-0.localdomain controller-0", "172.17.3.11 controller-0.storage.localdomain controller-0.storage", "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.111 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.14 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.16 compute-0.external.localdomain compute-0.external", "192.168.24.16 compute-0.management.localdomain compute-0.management", "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.17 ceph-0.localdomain ceph-0", "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.13 ceph-0.external.localdomain ceph-0.external", "192.168.24.13 ceph-0.management.localdomain ceph-0.management", "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", 
"", "[2018-06-22 04:50:20,221] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/5a51ec44-59ad-41e8-95f6-1dab6d2ba1b5", "", "[2018-06-22 04:50:20,225] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-22 04:50:20,226] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/5a51ec44-59ad-41e8-95f6-1dab6d2ba1b5.json < /var/lib/heat-config/deployed/5a51ec44-59ad-41e8-95f6-1dab6d2ba1b5.notify.json", "[2018-06-22 04:50:20,615] (heat-config) [INFO] ", "[2018-06-22 04:50:20,615] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-22 04:50:20,678 p=11115 u=mistral | TASK [Output for ComputeHostsDeployment] *************************************** >2018-06-22 04:50:20,782 p=11115 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-22 04:50:20,186] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/5a51ec44-59ad-41e8-95f6-1dab6d2ba1b5.json", > "[2018-06-22 04:50:20,225] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' 
-z '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 
overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain 
controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.7 
overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.7 
overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.7 
overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 
overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 
0}", > "[2018-06-22 04:50:20,225] (heat-config) [DEBUG] [2018-06-22 04:50:20,208] (heat-config) [INFO] hosts=192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.11 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.10 controller-0.localdomain controller-0", > "172.17.3.11 controller-0.storage.localdomain controller-0.storage", > "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.111 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.14 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.16 compute-0.external.localdomain compute-0.external", > "192.168.24.16 compute-0.management.localdomain compute-0.management", > "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.17 ceph-0.localdomain ceph-0", > "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.13 ceph-0.external.localdomain ceph-0.external", > "192.168.24.13 ceph-0.management.localdomain ceph-0.management", > "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane", > "[2018-06-22 04:50:20,209] 
(heat-config) [INFO] deploy_server_id=873c916c-1df4-487f-9ebb-a2c81aa5dfd9", > "[2018-06-22 04:50:20,209] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-22 04:50:20,209] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeHostsDeployment-7cfcd65zi35o-0-hzff2hjzljyo/2a82888c-6723-4188-a0f7-81d0421079b4", > "[2018-06-22 04:50:20,209] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-22 04:50:20,209] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-22 04:50:20,209] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/5a51ec44-59ad-41e8-95f6-1dab6d2ba1b5", > "[2018-06-22 04:50:20,221] (heat-config) [INFO] ", > "[2018-06-22 04:50:20,221] (heat-config) [DEBUG] + set -o pipefail", > "+ '[' '!' -z '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.11 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.10 controller-0.localdomain controller-0", > "172.17.3.11 controller-0.storage.localdomain controller-0.storage", > "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.111 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.14 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.16 compute-0.external.localdomain compute-0.external", > 
"192.168.24.16 compute-0.management.localdomain compute-0.management", > "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.17 ceph-0.localdomain ceph-0", > "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.13 ceph-0.external.localdomain ceph-0.external", > "192.168.24.13 ceph-0.management.localdomain ceph-0.management", > "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.11 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.10 controller-0.localdomain controller-0", > "172.17.3.11 controller-0.storage.localdomain controller-0.storage", > "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.111 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.14 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.16 compute-0.external.localdomain compute-0.external", > "192.168.24.16 
compute-0.management.localdomain compute-0.management", > "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.17 ceph-0.localdomain ceph-0", > "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.13 ceph-0.external.localdomain ceph-0.external", > "192.168.24.13 ceph-0.management.localdomain ceph-0.management", > "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.debian.tmpl", > "+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.11 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.10 controller-0.localdomain controller-0", > "172.17.3.11 controller-0.storage.localdomain controller-0.storage", > "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.111 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.14 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.16 compute-0.external.localdomain compute-0.external", > "192.168.24.16 compute-0.management.localdomain compute-0.management", > 
"192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.17 ceph-0.localdomain ceph-0", > "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.13 ceph-0.external.localdomain ceph-0.external", > "192.168.24.13 ceph-0.management.localdomain ceph-0.management", > "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.11 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.10 controller-0.localdomain controller-0", > "172.17.3.11 controller-0.storage.localdomain controller-0.storage", > "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.111 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.14 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.16 
compute-0.external.localdomain compute-0.external", > "192.168.24.16 compute-0.management.localdomain compute-0.management", > "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.17 ceph-0.localdomain ceph-0", > "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.13 ceph-0.external.localdomain ceph-0.external", > "192.168.24.13 ceph-0.management.localdomain ceph-0.management", > "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.11 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.10 controller-0.localdomain controller-0", > "172.17.3.11 controller-0.storage.localdomain controller-0.storage", > "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.111 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.14 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain 
compute-0.tenant", > "192.168.24.16 compute-0.external.localdomain compute-0.external", > "192.168.24.16 compute-0.management.localdomain compute-0.management", > "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.17 ceph-0.localdomain ceph-0", > "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.13 ceph-0.external.localdomain ceph-0.external", > "192.168.24.13 ceph-0.management.localdomain ceph-0.management", > "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl", > "+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.11 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.10 controller-0.localdomain controller-0", > "172.17.3.11 controller-0.storage.localdomain controller-0.storage", > "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.111 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.14 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.16 
compute-0.external.localdomain compute-0.external", > "192.168.24.16 compute-0.management.localdomain compute-0.management", > "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.17 ceph-0.localdomain ceph-0", > "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.13 ceph-0.external.localdomain ceph-0.external", > "192.168.24.13 ceph-0.management.localdomain ceph-0.management", > "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.11 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.10 controller-0.localdomain controller-0", > "172.17.3.11 controller-0.storage.localdomain controller-0.storage", > "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.111 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.14 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.14 compute-0.internalapi.localdomain 
compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.16 compute-0.external.localdomain compute-0.external", > "192.168.24.16 compute-0.management.localdomain compute-0.management", > "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.17 ceph-0.localdomain ceph-0", > "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.13 ceph-0.external.localdomain ceph-0.external", > "192.168.24.13 ceph-0.management.localdomain ceph-0.management", > "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.11 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.10 controller-0.localdomain controller-0", > "172.17.3.11 controller-0.storage.localdomain controller-0.storage", > "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.111 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.14 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.14 
compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.16 compute-0.external.localdomain compute-0.external", > "192.168.24.16 compute-0.management.localdomain compute-0.management", > "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.17 ceph-0.localdomain ceph-0", > "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.13 ceph-0.external.localdomain ceph-0.external", > "192.168.24.13 ceph-0.management.localdomain ceph-0.management", > "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.redhat.tmpl", > "+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.11 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.10 controller-0.localdomain controller-0", > "172.17.3.11 controller-0.storage.localdomain controller-0.storage", > "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.111 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.14 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", > 
"172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.16 compute-0.external.localdomain compute-0.external", > "192.168.24.16 compute-0.management.localdomain compute-0.management", > "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.17 ceph-0.localdomain ceph-0", > "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.13 ceph-0.external.localdomain ceph-0.external", > "192.168.24.13 ceph-0.management.localdomain ceph-0.management", > "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.11 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.10 controller-0.localdomain controller-0", > "172.17.3.11 controller-0.storage.localdomain controller-0.storage", > "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.111 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.14 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.16 compute-0.storagemgmt.localdomain 
compute-0.storagemgmt", > "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.16 compute-0.external.localdomain compute-0.external", > "192.168.24.16 compute-0.management.localdomain compute-0.management", > "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.17 ceph-0.localdomain ceph-0", > "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.13 ceph-0.external.localdomain ceph-0.external", > "192.168.24.13 ceph-0.management.localdomain ceph-0.management", > "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.11 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.10 controller-0.localdomain controller-0", > "172.17.3.11 controller-0.storage.localdomain controller-0.storage", > "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.111 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.14 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.16 
compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.16 compute-0.external.localdomain compute-0.external", > "192.168.24.16 compute-0.management.localdomain compute-0.management", > "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.17 ceph-0.localdomain ceph-0", > "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.13 ceph-0.external.localdomain ceph-0.external", > "192.168.24.13 ceph-0.management.localdomain ceph-0.management", > "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.suse.tmpl", > "+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.11 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.10 controller-0.localdomain controller-0", > "172.17.3.11 controller-0.storage.localdomain controller-0.storage", > "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.111 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.14 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > 
"172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.16 compute-0.external.localdomain compute-0.external", > "192.168.24.16 compute-0.management.localdomain compute-0.management", > "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.17 ceph-0.localdomain ceph-0", > "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.13 ceph-0.external.localdomain ceph-0.external", > "192.168.24.13 ceph-0.management.localdomain ceph-0.management", > "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.suse.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.11 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.10 controller-0.localdomain controller-0", > "172.17.3.11 controller-0.storage.localdomain controller-0.storage", > "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.111 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.14 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain 
compute-0.storage", > "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.16 compute-0.external.localdomain compute-0.external", > "192.168.24.16 compute-0.management.localdomain compute-0.management", > "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.17 ceph-0.localdomain ceph-0", > "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.13 ceph-0.external.localdomain ceph-0.external", > "192.168.24.13 ceph-0.management.localdomain ceph-0.management", > "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ write_entries /etc/hosts '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.11 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.10 controller-0.localdomain controller-0", > "172.17.3.11 controller-0.storage.localdomain controller-0.storage", > "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.111 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.14 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.16 
compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.16 compute-0.external.localdomain compute-0.external", > "192.168.24.16 compute-0.management.localdomain compute-0.management", > "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.17 ceph-0.localdomain ceph-0", > "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.13 ceph-0.external.localdomain ceph-0.external", > "192.168.24.13 ceph-0.management.localdomain ceph-0.management", > "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/hosts", > "+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.11 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.10 controller-0.localdomain controller-0", > "172.17.3.11 controller-0.storage.localdomain controller-0.storage", > "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.111 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.14 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.14 
compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.16 compute-0.external.localdomain compute-0.external", > "192.168.24.16 compute-0.management.localdomain compute-0.management", > "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.17 ceph-0.localdomain ceph-0", > "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.13 ceph-0.external.localdomain ceph-0.external", > "192.168.24.13 ceph-0.management.localdomain ceph-0.management", > "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/hosts ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.11 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.10 controller-0.localdomain controller-0", > "172.17.3.11 controller-0.storage.localdomain controller-0.storage", > "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.111 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.14 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.16 
compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.16 compute-0.external.localdomain compute-0.external", > "192.168.24.16 compute-0.management.localdomain compute-0.management", > "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.17 ceph-0.localdomain ceph-0", > "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.13 ceph-0.external.localdomain ceph-0.external", > "192.168.24.13 ceph-0.management.localdomain ceph-0.management", > "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "", > "[2018-06-22 04:50:20,221] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/5a51ec44-59ad-41e8-95f6-1dab6d2ba1b5", > "", > "[2018-06-22 04:50:20,225] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-22 04:50:20,226] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/5a51ec44-59ad-41e8-95f6-1dab6d2ba1b5.json < /var/lib/heat-config/deployed/5a51ec44-59ad-41e8-95f6-1dab6d2ba1b5.notify.json", > "[2018-06-22 04:50:20,615] (heat-config) [INFO] ", > "[2018-06-22 04:50:20,615] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-22 04:50:20,811 p=11115 u=mistral | TASK [Check-mode for Run deployment ComputeHostsDeployment] ******************** >2018-06-22 04:50:20,825 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:50:20,845 p=11115 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-22 
04:50:20,992 p=11115 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "eba0f427-cab4-4cae-8496-dbcb3c40d9e4"}, "changed": false} >2018-06-22 04:50:21,010 p=11115 u=mistral | TASK [Render deployment file for ComputeAllNodesDeployment] ******************** >2018-06-22 04:50:21,718 p=11115 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "56f33d8fc86518f3075bc1428487156b25bd6246", "dest": "/var/lib/heat-config/tripleo-config-download/ComputeAllNodesDeployment-eba0f427-cab4-4cae-8496-dbcb3c40d9e4", "gid": 0, "group": "root", "md5sum": "6d8274cfca92b903ce0f24ccf537800a", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 19019, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657421.16-91233968513877/source", "state": "file", "uid": 0} >2018-06-22 04:50:21,737 p=11115 u=mistral | TASK [Check if deployed file exists for ComputeAllNodesDeployment] ************* >2018-06-22 04:50:22,069 p=11115 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 04:50:22,090 p=11115 u=mistral | TASK [Check previous deployment rc for ComputeAllNodesDeployment] ************** >2018-06-22 04:50:22,110 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:50:22,131 p=11115 u=mistral | TASK [Remove deployed file for ComputeAllNodesDeployment when previous deployment failed] *** >2018-06-22 04:50:22,149 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:50:22,168 p=11115 u=mistral | TASK [Force remove deployed file for ComputeAllNodesDeployment] **************** >2018-06-22 04:50:22,186 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:50:22,205 p=11115 u=mistral | TASK [Run deployment ComputeAllNodesDeployment] ******************************** >2018-06-22 
04:50:23,098 p=11115 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/eba0f427-cab4-4cae-8496-dbcb3c40d9e4.notify.json)", "delta": "0:00:00.548185", "end": "2018-06-22 04:50:23.098038", "rc": 0, "start": "2018-06-22 04:50:22.549853", "stderr": "[2018-06-22 04:50:22,577] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/eba0f427-cab4-4cae-8496-dbcb3c40d9e4.json\n[2018-06-22 04:50:22,691] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-22 04:50:22,691] (heat-config) [DEBUG] \n[2018-06-22 04:50:22,691] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-06-22 04:50:22,692] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/eba0f427-cab4-4cae-8496-dbcb3c40d9e4.json < /var/lib/heat-config/deployed/eba0f427-cab4-4cae-8496-dbcb3c40d9e4.notify.json\n[2018-06-22 04:50:23,092] (heat-config) [INFO] \n[2018-06-22 04:50:23,092] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-22 04:50:22,577] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/eba0f427-cab4-4cae-8496-dbcb3c40d9e4.json", "[2018-06-22 04:50:22,691] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-22 04:50:22,691] (heat-config) [DEBUG] ", "[2018-06-22 04:50:22,691] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", "[2018-06-22 04:50:22,692] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/eba0f427-cab4-4cae-8496-dbcb3c40d9e4.json < /var/lib/heat-config/deployed/eba0f427-cab4-4cae-8496-dbcb3c40d9e4.notify.json", "[2018-06-22 04:50:23,092] (heat-config) [INFO] ", "[2018-06-22 04:50:23,092] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-22 
04:50:23,117 p=11115 u=mistral | TASK [Output for ComputeAllNodesDeployment] ************************************ >2018-06-22 04:50:23,164 p=11115 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-22 04:50:22,577] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/eba0f427-cab4-4cae-8496-dbcb3c40d9e4.json", > "[2018-06-22 04:50:22,691] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-22 04:50:22,691] (heat-config) [DEBUG] ", > "[2018-06-22 04:50:22,691] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", > "[2018-06-22 04:50:22,692] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/eba0f427-cab4-4cae-8496-dbcb3c40d9e4.json < /var/lib/heat-config/deployed/eba0f427-cab4-4cae-8496-dbcb3c40d9e4.notify.json", > "[2018-06-22 04:50:23,092] (heat-config) [INFO] ", > "[2018-06-22 04:50:23,092] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-22 04:50:23,185 p=11115 u=mistral | TASK [Check-mode for Run deployment ComputeAllNodesDeployment] ***************** >2018-06-22 04:50:23,198 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:50:23,216 p=11115 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-22 04:50:23,276 p=11115 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "7356ef3e-6a19-4398-884e-a7c32afea4cc"}, "changed": false} >2018-06-22 04:50:23,295 p=11115 u=mistral | TASK [Render deployment file for ComputeAllNodesValidationDeployment] ********** >2018-06-22 04:50:23,915 p=11115 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "2ea6b9188d918e21f9a1474e6afe7393020795c2", "dest": 
"/var/lib/heat-config/tripleo-config-download/ComputeAllNodesValidationDeployment-7356ef3e-6a19-4398-884e-a7c32afea4cc", "gid": 0, "group": "root", "md5sum": "39e4fbe125185341e517fa221101f568", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4935, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657423.35-233258224413254/source", "state": "file", "uid": 0} >2018-06-22 04:50:23,936 p=11115 u=mistral | TASK [Check if deployed file exists for ComputeAllNodesValidationDeployment] *** >2018-06-22 04:50:24,257 p=11115 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 04:50:24,277 p=11115 u=mistral | TASK [Check previous deployment rc for ComputeAllNodesValidationDeployment] **** >2018-06-22 04:50:24,294 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:50:24,313 p=11115 u=mistral | TASK [Remove deployed file for ComputeAllNodesValidationDeployment when previous deployment failed] *** >2018-06-22 04:50:24,329 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:50:24,349 p=11115 u=mistral | TASK [Force remove deployed file for ComputeAllNodesValidationDeployment] ****** >2018-06-22 04:50:24,367 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:50:24,386 p=11115 u=mistral | TASK [Run deployment ComputeAllNodesValidationDeployment] ********************** >2018-06-22 04:50:25,627 p=11115 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/7356ef3e-6a19-4398-884e-a7c32afea4cc.notify.json)", "delta": "0:00:00.921675", "end": "2018-06-22 04:50:25.630330", "rc": 0, "start": "2018-06-22 04:50:24.708655", "stderr": "[2018-06-22 04:50:24,730] 
(heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/7356ef3e-6a19-4398-884e-a7c32afea4cc.json\n[2018-06-22 04:50:25,245] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 172.17.1.10 for local network 172.17.1.0/24.\\nPing to 172.17.1.10 succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.12 for local network 172.17.2.0/24.\\nPing to 172.17.2.12 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.11 for local network 172.17.3.0/24.\\nPing to 172.17.3.11 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.12 for local network 192.168.24.0/24.\\nPing to 192.168.24.12 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-22 04:50:25,245] (heat-config) [DEBUG] [2018-06-22 04:50:24,750] (heat-config) [INFO] ping_test_ips=172.17.3.11 172.17.4.19 172.17.1.10 172.17.2.12 10.0.0.111 192.168.24.12\n[2018-06-22 04:50:24,750] (heat-config) [INFO] validate_fqdn=False\n[2018-06-22 04:50:24,750] (heat-config) [INFO] validate_ntp=True\n[2018-06-22 04:50:24,750] (heat-config) [INFO] deploy_server_id=873c916c-1df4-487f-9ebb-a2c81aa5dfd9\n[2018-06-22 04:50:24,750] (heat-config) [INFO] deploy_action=CREATE\n[2018-06-22 04:50:24,750] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeAllNodesValidationDeployment-oqwugjymippq-0-2oftrr7exndm/f0241a3c-f8a1-481d-9fff-4aad0d842071\n[2018-06-22 04:50:24,750] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-22 04:50:24,750] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-22 04:50:24,750] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/7356ef3e-6a19-4398-884e-a7c32afea4cc\n[2018-06-22 04:50:25,241] (heat-config) [INFO] Trying to ping 172.17.1.10 for local network 172.17.1.0/24.\nPing to 172.17.1.10 succeeded.\nSUCCESS\nTrying to ping 172.17.2.12 for local network 172.17.2.0/24.\nPing to 172.17.2.12 
succeeded.\nSUCCESS\nTrying to ping 172.17.3.11 for local network 172.17.3.0/24.\nPing to 172.17.3.11 succeeded.\nSUCCESS\nTrying to ping 192.168.24.12 for local network 192.168.24.0/24.\nPing to 192.168.24.12 succeeded.\nSUCCESS\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\nSUCCESS\n\n[2018-06-22 04:50:25,241] (heat-config) [DEBUG] \n[2018-06-22 04:50:25,241] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/7356ef3e-6a19-4398-884e-a7c32afea4cc\n\n[2018-06-22 04:50:25,245] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-22 04:50:25,246] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/7356ef3e-6a19-4398-884e-a7c32afea4cc.json < /var/lib/heat-config/deployed/7356ef3e-6a19-4398-884e-a7c32afea4cc.notify.json\n[2018-06-22 04:50:25,624] (heat-config) [INFO] \n[2018-06-22 04:50:25,625] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-22 04:50:24,730] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/7356ef3e-6a19-4398-884e-a7c32afea4cc.json", "[2018-06-22 04:50:25,245] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 172.17.1.10 for local network 172.17.1.0/24.\\nPing to 172.17.1.10 succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.12 for local network 172.17.2.0/24.\\nPing to 172.17.2.12 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.11 for local network 172.17.3.0/24.\\nPing to 172.17.3.11 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.12 for local network 192.168.24.0/24.\\nPing to 192.168.24.12 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-22 04:50:25,245] (heat-config) [DEBUG] [2018-06-22 04:50:24,750] (heat-config) [INFO] ping_test_ips=172.17.3.11 172.17.4.19 172.17.1.10 172.17.2.12 10.0.0.111 192.168.24.12", "[2018-06-22 04:50:24,750] (heat-config) [INFO] 
validate_fqdn=False", "[2018-06-22 04:50:24,750] (heat-config) [INFO] validate_ntp=True", "[2018-06-22 04:50:24,750] (heat-config) [INFO] deploy_server_id=873c916c-1df4-487f-9ebb-a2c81aa5dfd9", "[2018-06-22 04:50:24,750] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-22 04:50:24,750] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeAllNodesValidationDeployment-oqwugjymippq-0-2oftrr7exndm/f0241a3c-f8a1-481d-9fff-4aad0d842071", "[2018-06-22 04:50:24,750] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-22 04:50:24,750] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-22 04:50:24,750] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/7356ef3e-6a19-4398-884e-a7c32afea4cc", "[2018-06-22 04:50:25,241] (heat-config) [INFO] Trying to ping 172.17.1.10 for local network 172.17.1.0/24.", "Ping to 172.17.1.10 succeeded.", "SUCCESS", "Trying to ping 172.17.2.12 for local network 172.17.2.0/24.", "Ping to 172.17.2.12 succeeded.", "SUCCESS", "Trying to ping 172.17.3.11 for local network 172.17.3.0/24.", "Ping to 172.17.3.11 succeeded.", "SUCCESS", "Trying to ping 192.168.24.12 for local network 192.168.24.0/24.", "Ping to 192.168.24.12 succeeded.", "SUCCESS", "Trying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.", "SUCCESS", "", "[2018-06-22 04:50:25,241] (heat-config) [DEBUG] ", "[2018-06-22 04:50:25,241] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/7356ef3e-6a19-4398-884e-a7c32afea4cc", "", "[2018-06-22 04:50:25,245] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-22 04:50:25,246] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/7356ef3e-6a19-4398-884e-a7c32afea4cc.json < /var/lib/heat-config/deployed/7356ef3e-6a19-4398-884e-a7c32afea4cc.notify.json", "[2018-06-22 04:50:25,624] (heat-config) [INFO] ", "[2018-06-22 04:50:25,625] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} 
>2018-06-22 04:50:25,647 p=11115 u=mistral | TASK [Output for ComputeAllNodesValidationDeployment] ************************** >2018-06-22 04:50:25,700 p=11115 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-22 04:50:24,730] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/7356ef3e-6a19-4398-884e-a7c32afea4cc.json", > "[2018-06-22 04:50:25,245] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 172.17.1.10 for local network 172.17.1.0/24.\\nPing to 172.17.1.10 succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.12 for local network 172.17.2.0/24.\\nPing to 172.17.2.12 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.11 for local network 172.17.3.0/24.\\nPing to 172.17.3.11 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.12 for local network 192.168.24.0/24.\\nPing to 192.168.24.12 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-22 04:50:25,245] (heat-config) [DEBUG] [2018-06-22 04:50:24,750] (heat-config) [INFO] ping_test_ips=172.17.3.11 172.17.4.19 172.17.1.10 172.17.2.12 10.0.0.111 192.168.24.12", > "[2018-06-22 04:50:24,750] (heat-config) [INFO] validate_fqdn=False", > "[2018-06-22 04:50:24,750] (heat-config) [INFO] validate_ntp=True", > "[2018-06-22 04:50:24,750] (heat-config) [INFO] deploy_server_id=873c916c-1df4-487f-9ebb-a2c81aa5dfd9", > "[2018-06-22 04:50:24,750] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-22 04:50:24,750] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeAllNodesValidationDeployment-oqwugjymippq-0-2oftrr7exndm/f0241a3c-f8a1-481d-9fff-4aad0d842071", > "[2018-06-22 04:50:24,750] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-22 04:50:24,750] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-22 04:50:24,750] (heat-config) [DEBUG] 
Running /var/lib/heat-config/heat-config-script/7356ef3e-6a19-4398-884e-a7c32afea4cc", > "[2018-06-22 04:50:25,241] (heat-config) [INFO] Trying to ping 172.17.1.10 for local network 172.17.1.0/24.", > "Ping to 172.17.1.10 succeeded.", > "SUCCESS", > "Trying to ping 172.17.2.12 for local network 172.17.2.0/24.", > "Ping to 172.17.2.12 succeeded.", > "SUCCESS", > "Trying to ping 172.17.3.11 for local network 172.17.3.0/24.", > "Ping to 172.17.3.11 succeeded.", > "SUCCESS", > "Trying to ping 192.168.24.12 for local network 192.168.24.0/24.", > "Ping to 192.168.24.12 succeeded.", > "SUCCESS", > "Trying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.", > "SUCCESS", > "", > "[2018-06-22 04:50:25,241] (heat-config) [DEBUG] ", > "[2018-06-22 04:50:25,241] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/7356ef3e-6a19-4398-884e-a7c32afea4cc", > "", > "[2018-06-22 04:50:25,245] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-22 04:50:25,246] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/7356ef3e-6a19-4398-884e-a7c32afea4cc.json < /var/lib/heat-config/deployed/7356ef3e-6a19-4398-884e-a7c32afea4cc.notify.json", > "[2018-06-22 04:50:25,624] (heat-config) [INFO] ", > "[2018-06-22 04:50:25,625] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-22 04:50:25,721 p=11115 u=mistral | TASK [Check-mode for Run deployment ComputeAllNodesValidationDeployment] ******* >2018-06-22 04:50:25,736 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:50:25,754 p=11115 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-22 04:50:25,804 p=11115 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "bafec981-93c4-4c73-8763-d7372ddea79c"}, "changed": false} >2018-06-22 04:50:25,822 p=11115 u=mistral | TASK [Render deployment 
file for ComputeArtifactsDeploy] *********************** >2018-06-22 04:50:26,456 p=11115 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "7b1b73cb547a0be2a354e7dd6b4eb483312c8445", "dest": "/var/lib/heat-config/tripleo-config-download/ComputeArtifactsDeploy-bafec981-93c4-4c73-8763-d7372ddea79c", "gid": 0, "group": "root", "md5sum": "4abc981959c761ff46923662d3286604", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2015, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657425.87-215543787695238/source", "state": "file", "uid": 0} >2018-06-22 04:50:26,475 p=11115 u=mistral | TASK [Check if deployed file exists for ComputeArtifactsDeploy] **************** >2018-06-22 04:50:26,797 p=11115 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 04:50:26,815 p=11115 u=mistral | TASK [Check previous deployment rc for ComputeArtifactsDeploy] ***************** >2018-06-22 04:50:26,832 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:50:26,851 p=11115 u=mistral | TASK [Remove deployed file for ComputeArtifactsDeploy when previous deployment failed] *** >2018-06-22 04:50:26,868 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:50:26,887 p=11115 u=mistral | TASK [Force remove deployed file for ComputeArtifactsDeploy] ******************* >2018-06-22 04:50:26,902 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:50:26,923 p=11115 u=mistral | TASK [Run deployment ComputeArtifactsDeploy] *********************************** >2018-06-22 04:50:27,692 p=11115 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code 
/var/lib/heat-config/deployed/bafec981-93c4-4c73-8763-d7372ddea79c.notify.json)", "delta": "0:00:00.449431", "end": "2018-06-22 04:50:27.697566", "rc": 0, "start": "2018-06-22 04:50:27.248135", "stderr": "[2018-06-22 04:50:27,271] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/bafec981-93c4-4c73-8763-d7372ddea79c.json\n[2018-06-22 04:50:27,300] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-22 04:50:27,300] (heat-config) [DEBUG] [2018-06-22 04:50:27,291] (heat-config) [INFO] artifact_urls=\n[2018-06-22 04:50:27,292] (heat-config) [INFO] deploy_server_id=873c916c-1df4-487f-9ebb-a2c81aa5dfd9\n[2018-06-22 04:50:27,292] (heat-config) [INFO] deploy_action=CREATE\n[2018-06-22 04:50:27,292] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-lyl23itvojuz-ComputeArtifactsDeploy-avde4n2axvxv-0-5julbara32jw/32443905-dce9-417a-9ffb-509b3ba3448c\n[2018-06-22 04:50:27,292] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-22 04:50:27,292] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-22 04:50:27,292] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/bafec981-93c4-4c73-8763-d7372ddea79c\n[2018-06-22 04:50:27,297] (heat-config) [INFO] No artifact_urls was set. 
Skipping...\n\n[2018-06-22 04:50:27,297] (heat-config) [DEBUG] \n[2018-06-22 04:50:27,297] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/bafec981-93c4-4c73-8763-d7372ddea79c\n\n[2018-06-22 04:50:27,300] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-22 04:50:27,301] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/bafec981-93c4-4c73-8763-d7372ddea79c.json < /var/lib/heat-config/deployed/bafec981-93c4-4c73-8763-d7372ddea79c.notify.json\n[2018-06-22 04:50:27,691] (heat-config) [INFO] \n[2018-06-22 04:50:27,691] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-22 04:50:27,271] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/bafec981-93c4-4c73-8763-d7372ddea79c.json", "[2018-06-22 04:50:27,300] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-22 04:50:27,300] (heat-config) [DEBUG] [2018-06-22 04:50:27,291] (heat-config) [INFO] artifact_urls=", "[2018-06-22 04:50:27,292] (heat-config) [INFO] deploy_server_id=873c916c-1df4-487f-9ebb-a2c81aa5dfd9", "[2018-06-22 04:50:27,292] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-22 04:50:27,292] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-lyl23itvojuz-ComputeArtifactsDeploy-avde4n2axvxv-0-5julbara32jw/32443905-dce9-417a-9ffb-509b3ba3448c", "[2018-06-22 04:50:27,292] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-22 04:50:27,292] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-22 04:50:27,292] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/bafec981-93c4-4c73-8763-d7372ddea79c", "[2018-06-22 04:50:27,297] (heat-config) [INFO] No artifact_urls was set. 
Skipping...", "", "[2018-06-22 04:50:27,297] (heat-config) [DEBUG] ", "[2018-06-22 04:50:27,297] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/bafec981-93c4-4c73-8763-d7372ddea79c", "", "[2018-06-22 04:50:27,300] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-22 04:50:27,301] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/bafec981-93c4-4c73-8763-d7372ddea79c.json < /var/lib/heat-config/deployed/bafec981-93c4-4c73-8763-d7372ddea79c.notify.json", "[2018-06-22 04:50:27,691] (heat-config) [INFO] ", "[2018-06-22 04:50:27,691] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-22 04:50:27,712 p=11115 u=mistral | TASK [Output for ComputeArtifactsDeploy] *************************************** >2018-06-22 04:50:27,814 p=11115 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-22 04:50:27,271] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/bafec981-93c4-4c73-8763-d7372ddea79c.json", > "[2018-06-22 04:50:27,300] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. 
Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-22 04:50:27,300] (heat-config) [DEBUG] [2018-06-22 04:50:27,291] (heat-config) [INFO] artifact_urls=", > "[2018-06-22 04:50:27,292] (heat-config) [INFO] deploy_server_id=873c916c-1df4-487f-9ebb-a2c81aa5dfd9", > "[2018-06-22 04:50:27,292] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-22 04:50:27,292] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-lyl23itvojuz-ComputeArtifactsDeploy-avde4n2axvxv-0-5julbara32jw/32443905-dce9-417a-9ffb-509b3ba3448c", > "[2018-06-22 04:50:27,292] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-22 04:50:27,292] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-22 04:50:27,292] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/bafec981-93c4-4c73-8763-d7372ddea79c", > "[2018-06-22 04:50:27,297] (heat-config) [INFO] No artifact_urls was set. Skipping...", > "", > "[2018-06-22 04:50:27,297] (heat-config) [DEBUG] ", > "[2018-06-22 04:50:27,297] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/bafec981-93c4-4c73-8763-d7372ddea79c", > "", > "[2018-06-22 04:50:27,300] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-22 04:50:27,301] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/bafec981-93c4-4c73-8763-d7372ddea79c.json < /var/lib/heat-config/deployed/bafec981-93c4-4c73-8763-d7372ddea79c.notify.json", > "[2018-06-22 04:50:27,691] (heat-config) [INFO] ", > "[2018-06-22 04:50:27,691] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-22 04:50:27,835 p=11115 u=mistral | TASK [Check-mode for Run deployment ComputeArtifactsDeploy] ******************** >2018-06-22 04:50:27,850 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:50:27,868 p=11115 u=mistral | TASK [Lookup deployment 
UUID] ************************************************** >2018-06-22 04:50:28,002 p=11115 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "2f86a0bd-0797-4946-8a07-c15d95c31858"}, "changed": false} >2018-06-22 04:50:28,021 p=11115 u=mistral | TASK [Render deployment file for ComputeHostPrepDeployment] ******************** >2018-06-22 04:50:28,700 p=11115 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "e89a9316fef425f9da880fdb57ceaa5bf1cdffec", "dest": "/var/lib/heat-config/tripleo-config-download/ComputeHostPrepDeployment-2f86a0bd-0797-4946-8a07-c15d95c31858", "gid": 0, "group": "root", "md5sum": "35006818d3209b1dc681a662d47bb717", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 33672, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657428.15-185955546956888/source", "state": "file", "uid": 0} >2018-06-22 04:50:28,719 p=11115 u=mistral | TASK [Check if deployed file exists for ComputeHostPrepDeployment] ************* >2018-06-22 04:50:29,083 p=11115 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 04:50:29,104 p=11115 u=mistral | TASK [Check previous deployment rc for ComputeHostPrepDeployment] ************** >2018-06-22 04:50:29,121 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:50:29,180 p=11115 u=mistral | TASK [Remove deployed file for ComputeHostPrepDeployment when previous deployment failed] *** >2018-06-22 04:50:29,198 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:50:29,217 p=11115 u=mistral | TASK [Force remove deployed file for ComputeHostPrepDeployment] **************** >2018-06-22 04:50:29,233 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:50:29,252 p=11115 u=mistral | TASK [Run deployment 
ComputeHostPrepDeployment] ******************************** >2018-06-22 04:50:38,793 p=11115 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/2f86a0bd-0797-4946-8a07-c15d95c31858.notify.json)", "delta": "0:00:09.212268", "end": "2018-06-22 04:50:38.792273", "rc": 0, "start": "2018-06-22 04:50:29.580005", "stderr": "[2018-06-22 04:50:29,604] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/2f86a0bd-0797-4946-8a07-c15d95c31858.json\n[2018-06-22 04:50:38,423] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [ceilometer logs readme] **************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\\\", \\\"msg\\\": \\\"Destination directory /var/log/ceilometer does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/neutron)\\n\\nTASK [neutron logs readme] *****************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"f5a95f434a4aad25a9a81a045dec39159a6e8864\\\", \\\"msg\\\": \\\"Destination directory /var/log/neutron does not exist\\\"}\\n...ignoring\\n\\nTASK [stat /lib/systemd/system/iscsid.socket] **********************************\\nok: [localhost]\\n\\nTASK [Stop and disable iscsid.socket service] **********************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [nova logs readme] ********************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"c2216cc4edf5d3ce90f10748c3243db4e1842a85\\\", \\\"msg\\\": \\\"Destination directory /var/log/nova does not exist\\\"}\\n...ignoring\\n\\nTASK [Mount Nova NFS Share] ****************************************************\\nskipping: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/lib/nova)\\nok: [localhost] => (item=/var/lib/libvirt)\\n\\nTASK [ensure ceph configurations exist] ****************************************\\nchanged: [localhost]\\n\\nTASK [is Instance HA enabled] **************************************************\\nok: [localhost]\\n\\nTASK [prepare Instance HA script directory] ************************************\\nskipping: [localhost]\\n\\nTASK [install Instance HA script that runs nova-compute] ***********************\\nskipping: [localhost]\\n\\nTASK [Get list of instance HA compute nodes] ***********************************\\nskipping: [localhost]\\n\\nTASK [If instance HA is enabled on the node activate the evacuation completed check] ***\\nskipping: [localhost]\\n\\nTASK [create libvirt persistent data directories] ******************************\\nok: [localhost] => (item=/etc/libvirt)\\nok: [localhost] => (item=/etc/libvirt/secrets)\\nok: [localhost] => (item=/etc/libvirt/qemu)\\nok: [localhost] => 
(item=/var/lib/libvirt)\\nchanged: [localhost] => (item=/var/log/containers/libvirt)\\n\\nTASK [ensure qemu group is present on the host] ********************************\\nok: [localhost]\\n\\nTASK [ensure qemu user is present on the host] *********************************\\nok: [localhost]\\n\\nTASK [create directory for vhost-user sockets with qemu ownership] *************\\nchanged: [localhost]\\n\\nTASK [check if libvirt is installed] *******************************************\\nchanged: [localhost]\\n\\nTASK [make sure libvirt services are disabled] *********************************\\nchanged: [localhost] => (item=libvirtd.service)\\nchanged: [localhost] => (item=virtlogd.socket)\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=20 changed=12 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \" [WARNING]: Consider using the yum, dnf or zypper module rather than running\\nrpm. 
If you need to use command because yum, dnf or zypper is insufficient you\\ncan add warn=False to this command task or set command_warnings=False in\\nansible.cfg to get rid of this message.\\n\", \"deploy_status_code\": 0}\n[2018-06-22 04:50:38,423] (heat-config) [DEBUG] [2018-06-22 04:50:29,626] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/2f86a0bd-0797-4946-8a07-c15d95c31858_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/2f86a0bd-0797-4946-8a07-c15d95c31858_variables.json\n[2018-06-22 04:50:38,419] (heat-config) [INFO] Return code 0\n[2018-06-22 04:50:38,419] (heat-config) [INFO] \nPLAY [localhost] ***************************************************************\n\nTASK [Gathering Facts] *********************************************************\nok: [localhost]\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost]\n\nTASK [ceilometer logs readme] **************************************************\nfatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\", \"msg\": \"Destination directory /var/log/ceilometer does not exist\"}\n...ignoring\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/neutron)\n\nTASK [neutron logs readme] *****************************************************\nfatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"f5a95f434a4aad25a9a81a045dec39159a6e8864\", \"msg\": \"Destination directory /var/log/neutron does not exist\"}\n...ignoring\n\nTASK [stat /lib/systemd/system/iscsid.socket] **********************************\nok: [localhost]\n\nTASK [Stop and disable iscsid.socket service] **********************************\nchanged: [localhost]\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost]\n\nTASK [nova logs readme] ********************************************************\nfatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"c2216cc4edf5d3ce90f10748c3243db4e1842a85\", \"msg\": \"Destination directory /var/log/nova does not exist\"}\n...ignoring\n\nTASK [Mount Nova NFS Share] ****************************************************\nskipping: [localhost]\n\nTASK [create persistent directories] *******************************************\nchanged: [localhost] => (item=/var/lib/nova)\nok: [localhost] => (item=/var/lib/libvirt)\n\nTASK [ensure ceph configurations exist] ****************************************\nchanged: [localhost]\n\nTASK [is Instance HA enabled] **************************************************\nok: [localhost]\n\nTASK [prepare Instance HA script directory] ************************************\nskipping: [localhost]\n\nTASK [install Instance HA script that runs nova-compute] ***********************\nskipping: [localhost]\n\nTASK [Get list of instance HA compute nodes] ***********************************\nskipping: [localhost]\n\nTASK [If instance HA is enabled on the node activate the evacuation completed check] ***\nskipping: [localhost]\n\nTASK [create libvirt persistent data directories] ******************************\nok: [localhost] => (item=/etc/libvirt)\nok: [localhost] => (item=/etc/libvirt/secrets)\nok: [localhost] => (item=/etc/libvirt/qemu)\nok: [localhost] => (item=/var/lib/libvirt)\nchanged: [localhost] => 
(item=/var/log/containers/libvirt)\n\nTASK [ensure qemu group is present on the host] ********************************\nok: [localhost]\n\nTASK [ensure qemu user is present on the host] *********************************\nok: [localhost]\n\nTASK [create directory for vhost-user sockets with qemu ownership] *************\nchanged: [localhost]\n\nTASK [check if libvirt is installed] *******************************************\nchanged: [localhost]\n\nTASK [make sure libvirt services are disabled] *********************************\nchanged: [localhost] => (item=libvirtd.service)\nchanged: [localhost] => (item=virtlogd.socket)\n\nTASK [Create /var/lib/docker-puppet] *******************************************\nchanged: [localhost]\n\nTASK [Write docker-puppet.py] **************************************************\nchanged: [localhost]\n\nPLAY RECAP *********************************************************************\nlocalhost : ok=20 changed=12 unreachable=0 failed=0 \n\n\n[2018-06-22 04:50:38,419] (heat-config) [INFO] [WARNING]: Consider using the yum, dnf or zypper module rather than running\nrpm. 
If you need to use command because yum, dnf or zypper is insufficient you\ncan add warn=False to this command task or set command_warnings=False in\nansible.cfg to get rid of this message.\n\n[2018-06-22 04:50:38,419] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/2f86a0bd-0797-4946-8a07-c15d95c31858_playbook.yaml\n\n[2018-06-22 04:50:38,423] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible\n[2018-06-22 04:50:38,424] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/2f86a0bd-0797-4946-8a07-c15d95c31858.json < /var/lib/heat-config/deployed/2f86a0bd-0797-4946-8a07-c15d95c31858.notify.json\n[2018-06-22 04:50:38,786] (heat-config) [INFO] \n[2018-06-22 04:50:38,786] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-22 04:50:29,604] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/2f86a0bd-0797-4946-8a07-c15d95c31858.json", "[2018-06-22 04:50:38,423] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [ceilometer logs readme] **************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\\\", \\\"msg\\\": \\\"Destination directory /var/log/ceilometer does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/neutron)\\n\\nTASK [neutron logs readme] *****************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"f5a95f434a4aad25a9a81a045dec39159a6e8864\\\", \\\"msg\\\": \\\"Destination directory /var/log/neutron does not exist\\\"}\\n...ignoring\\n\\nTASK [stat /lib/systemd/system/iscsid.socket] **********************************\\nok: [localhost]\\n\\nTASK [Stop and disable iscsid.socket service] **********************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [nova logs readme] ********************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"c2216cc4edf5d3ce90f10748c3243db4e1842a85\\\", \\\"msg\\\": \\\"Destination directory /var/log/nova does not exist\\\"}\\n...ignoring\\n\\nTASK [Mount Nova NFS Share] ****************************************************\\nskipping: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/lib/nova)\\nok: [localhost] => (item=/var/lib/libvirt)\\n\\nTASK [ensure ceph configurations exist] ****************************************\\nchanged: [localhost]\\n\\nTASK [is Instance HA enabled] **************************************************\\nok: [localhost]\\n\\nTASK [prepare Instance HA script directory] ************************************\\nskipping: [localhost]\\n\\nTASK [install Instance HA script that runs nova-compute] ***********************\\nskipping: [localhost]\\n\\nTASK [Get list of instance HA compute nodes] ***********************************\\nskipping: [localhost]\\n\\nTASK [If instance HA is enabled on the node activate the evacuation completed check] ***\\nskipping: [localhost]\\n\\nTASK [create libvirt persistent data directories] ******************************\\nok: [localhost] => (item=/etc/libvirt)\\nok: [localhost] => (item=/etc/libvirt/secrets)\\nok: [localhost] => (item=/etc/libvirt/qemu)\\nok: [localhost] => 
(item=/var/lib/libvirt)\\nchanged: [localhost] => (item=/var/log/containers/libvirt)\\n\\nTASK [ensure qemu group is present on the host] ********************************\\nok: [localhost]\\n\\nTASK [ensure qemu user is present on the host] *********************************\\nok: [localhost]\\n\\nTASK [create directory for vhost-user sockets with qemu ownership] *************\\nchanged: [localhost]\\n\\nTASK [check if libvirt is installed] *******************************************\\nchanged: [localhost]\\n\\nTASK [make sure libvirt services are disabled] *********************************\\nchanged: [localhost] => (item=libvirtd.service)\\nchanged: [localhost] => (item=virtlogd.socket)\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=20 changed=12 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \" [WARNING]: Consider using the yum, dnf or zypper module rather than running\\nrpm. 
If you need to use command because yum, dnf or zypper is insufficient you\\ncan add warn=False to this command task or set command_warnings=False in\\nansible.cfg to get rid of this message.\\n\", \"deploy_status_code\": 0}", "[2018-06-22 04:50:38,423] (heat-config) [DEBUG] [2018-06-22 04:50:29,626] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/2f86a0bd-0797-4946-8a07-c15d95c31858_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/2f86a0bd-0797-4946-8a07-c15d95c31858_variables.json", "[2018-06-22 04:50:38,419] (heat-config) [INFO] Return code 0", "[2018-06-22 04:50:38,419] (heat-config) [INFO] ", "PLAY [localhost] ***************************************************************", "", "TASK [Gathering Facts] *********************************************************", "ok: [localhost]", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost]", "", "TASK [ceilometer logs readme] **************************************************", "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\", \"msg\": \"Destination directory /var/log/ceilometer does not exist\"}", "...ignoring", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/neutron)", "", "TASK [neutron logs readme] *****************************************************", "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"f5a95f434a4aad25a9a81a045dec39159a6e8864\", \"msg\": \"Destination directory /var/log/neutron does not exist\"}", "...ignoring", "", "TASK [stat /lib/systemd/system/iscsid.socket] **********************************", "ok: [localhost]", "", "TASK [Stop and disable iscsid.socket service] **********************************", "changed: [localhost]", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost]", "", "TASK [nova logs readme] ********************************************************", "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"c2216cc4edf5d3ce90f10748c3243db4e1842a85\", \"msg\": \"Destination directory /var/log/nova does not exist\"}", "...ignoring", "", "TASK [Mount Nova NFS Share] ****************************************************", "skipping: [localhost]", "", "TASK [create persistent directories] *******************************************", "changed: [localhost] => (item=/var/lib/nova)", "ok: [localhost] => (item=/var/lib/libvirt)", "", "TASK [ensure ceph configurations exist] ****************************************", "changed: [localhost]", "", "TASK [is Instance HA enabled] **************************************************", "ok: [localhost]", "", "TASK [prepare Instance HA script directory] ************************************", "skipping: [localhost]", "", "TASK [install Instance HA script that runs nova-compute] ***********************", "skipping: [localhost]", "", "TASK [Get list of instance HA compute nodes] ***********************************", "skipping: [localhost]", "", "TASK [If instance HA is enabled on the node activate the evacuation completed check] ***", "skipping: [localhost]", "", "TASK [create libvirt persistent data directories] ******************************", "ok: [localhost] => (item=/etc/libvirt)", "ok: [localhost] => (item=/etc/libvirt/secrets)", "ok: [localhost] => (item=/etc/libvirt/qemu)", "ok: 
[localhost] => (item=/var/lib/libvirt)", "changed: [localhost] => (item=/var/log/containers/libvirt)", "", "TASK [ensure qemu group is present on the host] ********************************", "ok: [localhost]", "", "TASK [ensure qemu user is present on the host] *********************************", "ok: [localhost]", "", "TASK [create directory for vhost-user sockets with qemu ownership] *************", "changed: [localhost]", "", "TASK [check if libvirt is installed] *******************************************", "changed: [localhost]", "", "TASK [make sure libvirt services are disabled] *********************************", "changed: [localhost] => (item=libvirtd.service)", "changed: [localhost] => (item=virtlogd.socket)", "", "TASK [Create /var/lib/docker-puppet] *******************************************", "changed: [localhost]", "", "TASK [Write docker-puppet.py] **************************************************", "changed: [localhost]", "", "PLAY RECAP *********************************************************************", "localhost : ok=20 changed=12 unreachable=0 failed=0 ", "", "", "[2018-06-22 04:50:38,419] (heat-config) [INFO] [WARNING]: Consider using the yum, dnf or zypper module rather than running", "rpm. 
If you need to use command because yum, dnf or zypper is insufficient you", "can add warn=False to this command task or set command_warnings=False in", "ansible.cfg to get rid of this message.", "", "[2018-06-22 04:50:38,419] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/2f86a0bd-0797-4946-8a07-c15d95c31858_playbook.yaml", "", "[2018-06-22 04:50:38,423] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", "[2018-06-22 04:50:38,424] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/2f86a0bd-0797-4946-8a07-c15d95c31858.json < /var/lib/heat-config/deployed/2f86a0bd-0797-4946-8a07-c15d95c31858.notify.json", "[2018-06-22 04:50:38,786] (heat-config) [INFO] ", "[2018-06-22 04:50:38,786] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-22 04:50:38,815 p=11115 u=mistral | TASK [Output for ComputeHostPrepDeployment] ************************************ >2018-06-22 04:50:38,870 p=11115 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-22 04:50:29,604] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/2f86a0bd-0797-4946-8a07-c15d95c31858.json", > "[2018-06-22 04:50:38,423] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [ceilometer logs readme] **************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\\\", \\\"msg\\\": \\\"Destination directory /var/log/ceilometer does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/neutron)\\n\\nTASK [neutron logs readme] *****************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"f5a95f434a4aad25a9a81a045dec39159a6e8864\\\", \\\"msg\\\": \\\"Destination directory /var/log/neutron does not exist\\\"}\\n...ignoring\\n\\nTASK [stat /lib/systemd/system/iscsid.socket] **********************************\\nok: [localhost]\\n\\nTASK [Stop and disable iscsid.socket service] **********************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [nova logs readme] ********************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"c2216cc4edf5d3ce90f10748c3243db4e1842a85\\\", \\\"msg\\\": \\\"Destination directory /var/log/nova does not exist\\\"}\\n...ignoring\\n\\nTASK [Mount Nova NFS Share] ****************************************************\\nskipping: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/lib/nova)\\nok: [localhost] => (item=/var/lib/libvirt)\\n\\nTASK [ensure ceph configurations exist] ****************************************\\nchanged: [localhost]\\n\\nTASK [is Instance HA enabled] **************************************************\\nok: [localhost]\\n\\nTASK [prepare Instance HA script directory] ************************************\\nskipping: [localhost]\\n\\nTASK [install Instance HA script that runs nova-compute] ***********************\\nskipping: [localhost]\\n\\nTASK [Get list of instance HA compute nodes] ***********************************\\nskipping: [localhost]\\n\\nTASK [If instance HA is enabled on the node activate the evacuation completed check] ***\\nskipping: [localhost]\\n\\nTASK [create libvirt persistent data directories] ******************************\\nok: [localhost] => (item=/etc/libvirt)\\nok: [localhost] => (item=/etc/libvirt/secrets)\\nok: [localhost] => (item=/etc/libvirt/qemu)\\nok: [localhost] => (item=/var/lib/libvirt)\\nchanged: [localhost] => (item=/var/log/containers/libvirt)\\n\\nTASK [ensure qemu group is present on the host] ********************************\\nok: [localhost]\\n\\nTASK [ensure qemu user is present on the host] *********************************\\nok: [localhost]\\n\\nTASK [create directory for vhost-user sockets with qemu ownership] *************\\nchanged: [localhost]\\n\\nTASK [check if libvirt is installed] *******************************************\\nchanged: [localhost]\\n\\nTASK [make sure libvirt services are disabled] *********************************\\nchanged: [localhost] => 
(item=libvirtd.service)\\nchanged: [localhost] => (item=virtlogd.socket)\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=20 changed=12 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \" [WARNING]: Consider using the yum, dnf or zypper module rather than running\\nrpm. If you need to use command because yum, dnf or zypper is insufficient you\\ncan add warn=False to this command task or set command_warnings=False in\\nansible.cfg to get rid of this message.\\n\", \"deploy_status_code\": 0}", > "[2018-06-22 04:50:38,423] (heat-config) [DEBUG] [2018-06-22 04:50:29,626] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/2f86a0bd-0797-4946-8a07-c15d95c31858_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/2f86a0bd-0797-4946-8a07-c15d95c31858_variables.json", > "[2018-06-22 04:50:38,419] (heat-config) [INFO] Return code 0", > "[2018-06-22 04:50:38,419] (heat-config) [INFO] ", > "PLAY [localhost] ***************************************************************", > "", > "TASK [Gathering Facts] *********************************************************", > "ok: [localhost]", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost]", > "", > "TASK [ceilometer logs readme] **************************************************", > "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\", \"msg\": \"Destination directory /var/log/ceilometer does not exist\"}", > "...ignoring", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/neutron)", > "", > "TASK [neutron logs readme] *****************************************************", > "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"f5a95f434a4aad25a9a81a045dec39159a6e8864\", \"msg\": \"Destination directory /var/log/neutron does not exist\"}", > "...ignoring", > "", > "TASK [stat /lib/systemd/system/iscsid.socket] **********************************", > "ok: [localhost]", > "", > "TASK [Stop and disable iscsid.socket service] **********************************", > "changed: [localhost]", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost]", > "", > "TASK [nova logs readme] ********************************************************", > "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"c2216cc4edf5d3ce90f10748c3243db4e1842a85\", \"msg\": \"Destination directory /var/log/nova does not exist\"}", > "...ignoring", > "", > "TASK [Mount Nova NFS Share] ****************************************************", > "skipping: [localhost]", > "", > "TASK [create persistent directories] *******************************************", > "changed: [localhost] => (item=/var/lib/nova)", > "ok: [localhost] => (item=/var/lib/libvirt)", > "", > "TASK [ensure ceph configurations exist] ****************************************", > "changed: [localhost]", > "", > "TASK [is Instance HA enabled] **************************************************", > "ok: [localhost]", > "", > "TASK [prepare Instance HA script directory] ************************************", > "skipping: [localhost]", > "", > "TASK [install Instance HA script that runs nova-compute] ***********************", > "skipping: [localhost]", > "", > "TASK [Get list of instance HA compute nodes] ***********************************", > "skipping: [localhost]", > "", > "TASK [If instance HA is enabled on the node activate the evacuation completed check] ***", > "skipping: [localhost]", > "", > "TASK [create libvirt persistent data directories] ******************************", > "ok: [localhost] => (item=/etc/libvirt)", > "ok: [localhost] => (item=/etc/libvirt/secrets)", > "ok: [localhost] => (item=/etc/libvirt/qemu)", > "ok: [localhost] => (item=/var/lib/libvirt)", > "changed: [localhost] => (item=/var/log/containers/libvirt)", > "", > "TASK [ensure qemu group is present on the host] ********************************", > "ok: [localhost]", > "", > "TASK [ensure qemu user is present on the host] *********************************", > "ok: [localhost]", > "", > "TASK [create directory for vhost-user sockets with qemu ownership] *************", > "changed: [localhost]", > "", > "TASK [check if libvirt is installed] *******************************************", > "changed: 
[localhost]", > "", > "TASK [make sure libvirt services are disabled] *********************************", > "changed: [localhost] => (item=libvirtd.service)", > "changed: [localhost] => (item=virtlogd.socket)", > "", > "TASK [Create /var/lib/docker-puppet] *******************************************", > "changed: [localhost]", > "", > "TASK [Write docker-puppet.py] **************************************************", > "changed: [localhost]", > "", > "PLAY RECAP *********************************************************************", > "localhost : ok=20 changed=12 unreachable=0 failed=0 ", > "", > "", > "[2018-06-22 04:50:38,419] (heat-config) [INFO] [WARNING]: Consider using the yum, dnf or zypper module rather than running", > "rpm. If you need to use command because yum, dnf or zypper is insufficient you", > "can add warn=False to this command task or set command_warnings=False in", > "ansible.cfg to get rid of this message.", > "", > "[2018-06-22 04:50:38,419] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/2f86a0bd-0797-4946-8a07-c15d95c31858_playbook.yaml", > "", > "[2018-06-22 04:50:38,423] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", > "[2018-06-22 04:50:38,424] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/2f86a0bd-0797-4946-8a07-c15d95c31858.json < /var/lib/heat-config/deployed/2f86a0bd-0797-4946-8a07-c15d95c31858.notify.json", > "[2018-06-22 04:50:38,786] (heat-config) [INFO] ", > "[2018-06-22 04:50:38,786] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-22 04:50:38,892 p=11115 u=mistral | TASK [Check-mode for Run deployment ComputeHostPrepDeployment] ***************** >2018-06-22 04:50:38,906 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:50:38,929 p=11115 u=mistral | TASK [include] ***************************************************************** >2018-06-22 
04:50:39,016 p=11115 u=mistral | TASK [include] ***************************************************************** >2018-06-22 04:50:39,109 p=11115 u=mistral | TASK [include] ***************************************************************** >2018-06-22 04:50:39,345 p=11115 u=mistral | included: /var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/CephStorage/deployments.yaml for ceph-0 >2018-06-22 04:50:39,354 p=11115 u=mistral | included: /var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/CephStorage/deployments.yaml for ceph-0 >2018-06-22 04:50:39,362 p=11115 u=mistral | included: /var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/CephStorage/deployments.yaml for ceph-0 >2018-06-22 04:50:39,371 p=11115 u=mistral | included: /var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/CephStorage/deployments.yaml for ceph-0 >2018-06-22 04:50:39,379 p=11115 u=mistral | included: /var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/CephStorage/deployments.yaml for ceph-0 >2018-06-22 04:50:39,388 p=11115 u=mistral | included: /var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/CephStorage/deployments.yaml for ceph-0 >2018-06-22 04:50:39,396 p=11115 u=mistral | included: /var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/CephStorage/deployments.yaml for ceph-0 >2018-06-22 04:50:39,405 p=11115 u=mistral | included: /var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/CephStorage/deployments.yaml for ceph-0 >2018-06-22 04:50:39,472 p=11115 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-22 04:50:39,532 p=11115 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "67d509c0-67a8-4aaf-af1b-e77221b1413e"}, "changed": false} >2018-06-22 04:50:39,551 p=11115 u=mistral | TASK [Render deployment file for NetworkDeployment] **************************** >2018-06-22 04:50:40,136 p=11115 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "42997d328240b46d41856e19bd1c39242465d9f9", "dest": 
"/var/lib/heat-config/tripleo-config-download/NetworkDeployment-67d509c0-67a8-4aaf-af1b-e77221b1413e", "gid": 0, "group": "root", "md5sum": "0d91b3481d732df220b09fceea4dafb2", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 8777, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657439.61-74604614919004/source", "state": "file", "uid": 0} >2018-06-22 04:50:40,155 p=11115 u=mistral | TASK [Check if deployed file exists for NetworkDeployment] ********************* >2018-06-22 04:50:40,461 p=11115 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 04:50:40,481 p=11115 u=mistral | TASK [Check previous deployment rc for NetworkDeployment] ********************** >2018-06-22 04:50:40,498 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:50:40,516 p=11115 u=mistral | TASK [Remove deployed file for NetworkDeployment when previous deployment failed] *** >2018-06-22 04:50:40,534 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:50:40,552 p=11115 u=mistral | TASK [Force remove deployed file for NetworkDeployment] ************************ >2018-06-22 04:50:40,569 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:50:40,589 p=11115 u=mistral | TASK [Run deployment NetworkDeployment] **************************************** >2018-06-22 04:50:55,844 p=11115 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/67d509c0-67a8-4aaf-af1b-e77221b1413e.notify.json)", "delta": "0:00:14.934336", "end": "2018-06-22 04:50:55.834611", "rc": 0, "start": "2018-06-22 04:50:40.900275", "stderr": "[2018-06-22 04:50:40,924] (heat-config) [DEBUG] Running 
/usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/67d509c0-67a8-4aaf-af1b-e77221b1413e.json\n[2018-06-22 04:50:55,449] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.3...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.13/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.17/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.13/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.17/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.10/24\\\"}], \\\"type\\\": 
\\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/06/22 04:50:41 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/06/22 04:50:41 AM] [INFO] Ifcfg net config provider created.\\n[2018/06/22 04:50:41 AM] [INFO] Not using any mapping file.\\n[2018/06/22 04:50:41 AM] [INFO] Finding active nics\\n[2018/06/22 04:50:41 AM] [INFO] eth2 is an embedded active nic\\n[2018/06/22 04:50:41 AM] [INFO] eth1 is an embedded active nic\\n[2018/06/22 04:50:41 AM] [INFO] eth0 is an embedded active nic\\n[2018/06/22 04:50:41 AM] [INFO] lo is not an active nic\\n[2018/06/22 04:50:41 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/06/22 04:50:41 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/06/22 04:50:41 AM] [INFO] nic3 mapped to: eth2\\n[2018/06/22 04:50:41 AM] [INFO] nic2 mapped to: eth1\\n[2018/06/22 04:50:41 AM] [INFO] nic1 mapped to: eth0\\n[2018/06/22 04:50:41 AM] [INFO] adding interface: eth0\\n[2018/06/22 04:50:41 AM] [INFO] adding custom route for interface: eth0\\n[2018/06/22 04:50:41 AM] [INFO] adding bridge: br-isolated\\n[2018/06/22 04:50:41 AM] [INFO] adding interface: eth1\\n[2018/06/22 04:50:41 AM] [INFO] adding vlan: vlan30\\n[2018/06/22 04:50:41 AM] [INFO] adding vlan: vlan40\\n[2018/06/22 04:50:41 AM] [INFO] applying network configs...\\n[2018/06/22 04:50:41 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/22 04:50:41 AM] [INFO] running ifdown on interface: vlan40\\n[2018/06/22 04:50:41 AM] [INFO] running ifdown on interface: eth1\\n[2018/06/22 04:50:41 AM] [INFO] running ifdown on interface: eth0\\n[2018/06/22 
04:50:41 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/22 04:50:41 AM] [INFO] running ifdown on interface: vlan40\\n[2018/06/22 04:50:41 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/06/22 04:50:41 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/06/22 04:50:42 AM] [INFO] running ifup on interface: eth1\\n[2018/06/22 04:50:42 AM] [INFO] running ifup on interface: eth0\\n[2018/06/22 04:50:46 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/22 04:50:50 AM] [INFO] running ifup on interface: vlan40\\n[2018/06/22 04:50:54 AM] [INFO] running ifup on interface: 
vlan30\\n[2018/06/22 04:50:54 AM] [INFO] running ifup on interface: vlan40\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.3\\n++ '[' -n 192.168.24.3 ']'\\n++ break\\n++ echo 192.168.24.3\\n+ local METADATA_IP=192.168.24.3\\n+ '[' -n 192.168.24.3 ']'\\n+ is_local_ip 192.168.24.3\\n+ local IP_TO_CHECK=192.168.24.3\\n+ ip -o a\\n+ grep 'inet6\\\\? 
192.168.24.3/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\\n+ _ping=ping\\n+ [[ 192.168.24.3 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.3\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}\n[2018-06-22 04:50:55,449] (heat-config) [DEBUG] [2018-06-22 04:50:40,946] (heat-config) [INFO] interface_name=nic1\n[2018-06-22 04:50:40,946] (heat-config) [INFO] bridge_name=br-ex\n[2018-06-22 04:50:40,946] (heat-config) [INFO] deploy_server_id=33738b22-53b0-409c-8c2a-3518ad03958c\n[2018-06-22 04:50:40,946] (heat-config) [INFO] deploy_action=CREATE\n[2018-06-22 04:50:40,946] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-ouduf67iilmp-0-wtivtksddghi-NetworkDeployment-ykpenzpavy5p-TripleOSoftwareDeployment-66zvzfrurnvm/b37f3911-f903-458e-8bb1-ff654a892731\n[2018-06-22 04:50:40,946] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-22 04:50:40,946] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-22 04:50:40,947] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/67d509c0-67a8-4aaf-af1b-e77221b1413e\n[2018-06-22 04:50:55,445] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.3...SUCCESS\n\n[2018-06-22 04:50:55,445] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.13/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": 
\"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.17/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}' ']'\n+ '[' -z '' ']'\n+ trap configure_safe_defaults EXIT\n+ mkdir -p /etc/os-net-config\n+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.13/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.17/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}'\n++ type -t network_config_hook\n+ '[' '' = function ']'\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\n+ set +e\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\n[2018/06/22 04:50:41 AM] [INFO] Using config file at: /etc/os-net-config/config.json\n[2018/06/22 04:50:41 AM] [INFO] Ifcfg net config provider created.\n[2018/06/22 04:50:41 AM] [INFO] Not using any mapping file.\n[2018/06/22 04:50:41 AM] [INFO] Finding active nics\n[2018/06/22 04:50:41 AM] [INFO] eth2 is an embedded active nic\n[2018/06/22 04:50:41 AM] [INFO] eth1 is an embedded active nic\n[2018/06/22 04:50:41 AM] [INFO] eth0 is an embedded active nic\n[2018/06/22 04:50:41 AM] [INFO] lo is not an active nic\n[2018/06/22 04:50:41 AM] [INFO] No DPDK mapping available in path 
(/var/lib/os-net-config/dpdk_mapping.yaml)\n[2018/06/22 04:50:41 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\n[2018/06/22 04:50:41 AM] [INFO] nic3 mapped to: eth2\n[2018/06/22 04:50:41 AM] [INFO] nic2 mapped to: eth1\n[2018/06/22 04:50:41 AM] [INFO] nic1 mapped to: eth0\n[2018/06/22 04:50:41 AM] [INFO] adding interface: eth0\n[2018/06/22 04:50:41 AM] [INFO] adding custom route for interface: eth0\n[2018/06/22 04:50:41 AM] [INFO] adding bridge: br-isolated\n[2018/06/22 04:50:41 AM] [INFO] adding interface: eth1\n[2018/06/22 04:50:41 AM] [INFO] adding vlan: vlan30\n[2018/06/22 04:50:41 AM] [INFO] adding vlan: vlan40\n[2018/06/22 04:50:41 AM] [INFO] applying network configs...\n[2018/06/22 04:50:41 AM] [INFO] running ifdown on interface: vlan30\n[2018/06/22 04:50:41 AM] [INFO] running ifdown on interface: vlan40\n[2018/06/22 04:50:41 AM] [INFO] running ifdown on interface: eth1\n[2018/06/22 04:50:41 AM] [INFO] running ifdown on interface: eth0\n[2018/06/22 04:50:41 AM] [INFO] running ifdown on interface: vlan30\n[2018/06/22 04:50:41 AM] [INFO] running ifdown on interface: vlan40\n[2018/06/22 04:50:41 AM] [INFO] running ifdown on bridge: br-isolated\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\n[2018/06/22 04:50:41 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/route6-eth1\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\n[2018/06/22 04:50:41 AM] [INFO] running ifup on bridge: br-isolated\n[2018/06/22 04:50:42 AM] [INFO] running ifup on interface: eth1\n[2018/06/22 04:50:42 AM] [INFO] running ifup on interface: eth0\n[2018/06/22 04:50:46 AM] [INFO] running ifup on interface: vlan30\n[2018/06/22 04:50:50 AM] [INFO] running ifup on interface: vlan40\n[2018/06/22 04:50:54 AM] [INFO] running ifup on interface: vlan30\n[2018/06/22 04:50:54 AM] [INFO] running ifup on interface: vlan40\n+ RETVAL=2\n+ set -e\n+ [[ 2 == 2 ]]\n+ ping_metadata_ip\n++ get_metadata_ip\n++ local METADATA_IP\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config 
--key os-collect-config.request.metadata_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=192.168.24.3\n++ '[' -n 192.168.24.3 ']'\n++ break\n++ echo 192.168.24.3\n+ local METADATA_IP=192.168.24.3\n+ '[' -n 192.168.24.3 ']'\n+ is_local_ip 192.168.24.3\n+ local IP_TO_CHECK=192.168.24.3\n+ ip -o a\n+ grep 'inet6\\? 192.168.24.3/'\n+ return 1\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\n+ _ping=ping\n+ [[ 192.168.24.3 =~ : ]]\n+ local COUNT=0\n+ ping -c 1 192.168.24.3\n+ echo SUCCESS\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\n+ configure_safe_defaults\n+ [[ 0 == 0 ]]\n+ return 0\n\n[2018-06-22 04:50:55,445] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/67d509c0-67a8-4aaf-af1b-e77221b1413e\n\n[2018-06-22 04:50:55,449] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-22 04:50:55,449] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/67d509c0-67a8-4aaf-af1b-e77221b1413e.json < /var/lib/heat-config/deployed/67d509c0-67a8-4aaf-af1b-e77221b1413e.notify.json\n[2018-06-22 04:50:55,828] (heat-config) [INFO] \n[2018-06-22 04:50:55,828] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-22 04:50:40,924] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/67d509c0-67a8-4aaf-af1b-e77221b1413e.json", "[2018-06-22 04:50:55,449] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.3...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.13/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": 
[{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.17/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.13/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.17/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/06/22 04:50:41 AM] [INFO] Using config file 
at: /etc/os-net-config/config.json\\n[2018/06/22 04:50:41 AM] [INFO] Ifcfg net config provider created.\\n[2018/06/22 04:50:41 AM] [INFO] Not using any mapping file.\\n[2018/06/22 04:50:41 AM] [INFO] Finding active nics\\n[2018/06/22 04:50:41 AM] [INFO] eth2 is an embedded active nic\\n[2018/06/22 04:50:41 AM] [INFO] eth1 is an embedded active nic\\n[2018/06/22 04:50:41 AM] [INFO] eth0 is an embedded active nic\\n[2018/06/22 04:50:41 AM] [INFO] lo is not an active nic\\n[2018/06/22 04:50:41 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/06/22 04:50:41 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/06/22 04:50:41 AM] [INFO] nic3 mapped to: eth2\\n[2018/06/22 04:50:41 AM] [INFO] nic2 mapped to: eth1\\n[2018/06/22 04:50:41 AM] [INFO] nic1 mapped to: eth0\\n[2018/06/22 04:50:41 AM] [INFO] adding interface: eth0\\n[2018/06/22 04:50:41 AM] [INFO] adding custom route for interface: eth0\\n[2018/06/22 04:50:41 AM] [INFO] adding bridge: br-isolated\\n[2018/06/22 04:50:41 AM] [INFO] adding interface: eth1\\n[2018/06/22 04:50:41 AM] [INFO] adding vlan: vlan30\\n[2018/06/22 04:50:41 AM] [INFO] adding vlan: vlan40\\n[2018/06/22 04:50:41 AM] [INFO] applying network configs...\\n[2018/06/22 04:50:41 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/22 04:50:41 AM] [INFO] running ifdown on interface: vlan40\\n[2018/06/22 04:50:41 AM] [INFO] running ifdown on interface: eth1\\n[2018/06/22 04:50:41 AM] [INFO] running ifdown on interface: eth0\\n[2018/06/22 04:50:41 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/22 04:50:41 AM] [INFO] running ifdown on interface: vlan40\\n[2018/06/22 04:50:41 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\\n[2018/06/22 04:50:41 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/route-br-isolated\\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/06/22 04:50:41 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/06/22 04:50:42 AM] [INFO] running ifup on interface: eth1\\n[2018/06/22 04:50:42 AM] [INFO] running ifup on interface: eth0\\n[2018/06/22 04:50:46 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/22 04:50:50 AM] [INFO] running ifup on interface: vlan40\\n[2018/06/22 04:50:54 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/22 04:50:54 AM] [INFO] running ifup on interface: vlan40\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed 
-e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.3\\n++ '[' -n 192.168.24.3 ']'\\n++ break\\n++ echo 192.168.24.3\\n+ local METADATA_IP=192.168.24.3\\n+ '[' -n 192.168.24.3 ']'\\n+ is_local_ip 192.168.24.3\\n+ local IP_TO_CHECK=192.168.24.3\\n+ ip -o a\\n+ grep 'inet6\\\\? 
192.168.24.3/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\\n+ _ping=ping\\n+ [[ 192.168.24.3 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.3\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}", "[2018-06-22 04:50:55,449] (heat-config) [DEBUG] [2018-06-22 04:50:40,946] (heat-config) [INFO] interface_name=nic1", "[2018-06-22 04:50:40,946] (heat-config) [INFO] bridge_name=br-ex", "[2018-06-22 04:50:40,946] (heat-config) [INFO] deploy_server_id=33738b22-53b0-409c-8c2a-3518ad03958c", "[2018-06-22 04:50:40,946] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-22 04:50:40,946] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-ouduf67iilmp-0-wtivtksddghi-NetworkDeployment-ykpenzpavy5p-TripleOSoftwareDeployment-66zvzfrurnvm/b37f3911-f903-458e-8bb1-ff654a892731", "[2018-06-22 04:50:40,946] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-22 04:50:40,946] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-22 04:50:40,947] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/67d509c0-67a8-4aaf-af1b-e77221b1413e", "[2018-06-22 04:50:55,445] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.3...SUCCESS", "", "[2018-06-22 04:50:55,445] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.13/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, 
{\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.17/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}' ']'", "+ '[' -z '' ']'", "+ trap configure_safe_defaults EXIT", "+ mkdir -p /etc/os-net-config", "+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.13/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.17/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}'", "++ type -t network_config_hook", "+ '[' '' = function ']'", "+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json", "+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json", "+ set +e", "+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes", "[2018/06/22 04:50:41 AM] [INFO] Using config file at: /etc/os-net-config/config.json", "[2018/06/22 04:50:41 AM] [INFO] Ifcfg net config provider created.", "[2018/06/22 04:50:41 AM] [INFO] Not using any mapping file.", "[2018/06/22 04:50:41 AM] [INFO] Finding active nics", "[2018/06/22 04:50:41 AM] [INFO] eth2 is an embedded active nic", "[2018/06/22 04:50:41 AM] [INFO] eth1 is an embedded active nic", "[2018/06/22 04:50:41 AM] [INFO] eth0 is an embedded active nic", "[2018/06/22 04:50:41 AM] [INFO] lo is not an active nic", "[2018/06/22 04:50:41 AM] 
[INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)", "[2018/06/22 04:50:41 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']", "[2018/06/22 04:50:41 AM] [INFO] nic3 mapped to: eth2", "[2018/06/22 04:50:41 AM] [INFO] nic2 mapped to: eth1", "[2018/06/22 04:50:41 AM] [INFO] nic1 mapped to: eth0", "[2018/06/22 04:50:41 AM] [INFO] adding interface: eth0", "[2018/06/22 04:50:41 AM] [INFO] adding custom route for interface: eth0", "[2018/06/22 04:50:41 AM] [INFO] adding bridge: br-isolated", "[2018/06/22 04:50:41 AM] [INFO] adding interface: eth1", "[2018/06/22 04:50:41 AM] [INFO] adding vlan: vlan30", "[2018/06/22 04:50:41 AM] [INFO] adding vlan: vlan40", "[2018/06/22 04:50:41 AM] [INFO] applying network configs...", "[2018/06/22 04:50:41 AM] [INFO] running ifdown on interface: vlan30", "[2018/06/22 04:50:41 AM] [INFO] running ifdown on interface: vlan40", "[2018/06/22 04:50:41 AM] [INFO] running ifdown on interface: eth1", "[2018/06/22 04:50:41 AM] [INFO] running ifdown on interface: eth0", "[2018/06/22 04:50:41 AM] [INFO] running ifdown on interface: vlan30", "[2018/06/22 04:50:41 AM] [INFO] running ifdown on interface: vlan40", "[2018/06/22 04:50:41 AM] [INFO] running ifdown on bridge: br-isolated", "[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated", "[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40", "[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated", "[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30", "[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0", "[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1", "[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated", "[2018/06/22 04:50:41 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/ifcfg-vlan30", "[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1", "[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0", "[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40", "[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40", "[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30", "[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0", "[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1", "[2018/06/22 04:50:41 AM] [INFO] running ifup on bridge: br-isolated", "[2018/06/22 04:50:42 AM] [INFO] running ifup on interface: eth1", "[2018/06/22 04:50:42 AM] [INFO] running ifup on interface: eth0", "[2018/06/22 04:50:46 AM] [INFO] running ifup on interface: vlan30", "[2018/06/22 04:50:50 AM] [INFO] running ifup on interface: vlan40", "[2018/06/22 04:50:54 AM] [INFO] running ifup on interface: vlan30", "[2018/06/22 04:50:54 AM] [INFO] running ifup on interface: vlan40", "+ RETVAL=2", "+ set -e", "+ [[ 2 == 2 ]]", "+ ping_metadata_ip", "++ get_metadata_ip", "++ local METADATA_IP", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=", "++ '[' -n '' ']'", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=", "++ '[' -n '' ']'", "++ for URL in 
os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=192.168.24.3", "++ '[' -n 192.168.24.3 ']'", "++ break", "++ echo 192.168.24.3", "+ local METADATA_IP=192.168.24.3", "+ '[' -n 192.168.24.3 ']'", "+ is_local_ip 192.168.24.3", "+ local IP_TO_CHECK=192.168.24.3", "+ ip -o a", "+ grep 'inet6\\? 192.168.24.3/'", "+ return 1", "+ echo -n 'Trying to ping metadata IP 192.168.24.3...'", "+ _ping=ping", "+ [[ 192.168.24.3 =~ : ]]", "+ local COUNT=0", "+ ping -c 1 192.168.24.3", "+ echo SUCCESS", "+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'", "+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules", "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'", "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'", "+ configure_safe_defaults", "+ [[ 0 == 0 ]]", "+ return 0", "", "[2018-06-22 04:50:55,445] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/67d509c0-67a8-4aaf-af1b-e77221b1413e", "", "[2018-06-22 04:50:55,449] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-22 04:50:55,449] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/67d509c0-67a8-4aaf-af1b-e77221b1413e.json < /var/lib/heat-config/deployed/67d509c0-67a8-4aaf-af1b-e77221b1413e.notify.json", "[2018-06-22 04:50:55,828] (heat-config) [INFO] ", "[2018-06-22 04:50:55,828] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-22 04:50:55,865 p=11115 u=mistral | TASK [Output for NetworkDeployment] ******************************************** >2018-06-22 04:50:55,924 p=11115 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-22 04:50:40,924] 
(heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/67d509c0-67a8-4aaf-af1b-e77221b1413e.json", > "[2018-06-22 04:50:55,449] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.3...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.13/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.17/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.13/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.17/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": 
\\\"172.17.4.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/06/22 04:50:41 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/06/22 04:50:41 AM] [INFO] Ifcfg net config provider created.\\n[2018/06/22 04:50:41 AM] [INFO] Not using any mapping file.\\n[2018/06/22 04:50:41 AM] [INFO] Finding active nics\\n[2018/06/22 04:50:41 AM] [INFO] eth2 is an embedded active nic\\n[2018/06/22 04:50:41 AM] [INFO] eth1 is an embedded active nic\\n[2018/06/22 04:50:41 AM] [INFO] eth0 is an embedded active nic\\n[2018/06/22 04:50:41 AM] [INFO] lo is not an active nic\\n[2018/06/22 04:50:41 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/06/22 04:50:41 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/06/22 04:50:41 AM] [INFO] nic3 mapped to: eth2\\n[2018/06/22 04:50:41 AM] [INFO] nic2 mapped to: eth1\\n[2018/06/22 04:50:41 AM] [INFO] nic1 mapped to: eth0\\n[2018/06/22 04:50:41 AM] [INFO] adding interface: eth0\\n[2018/06/22 04:50:41 AM] [INFO] adding custom route for interface: eth0\\n[2018/06/22 04:50:41 AM] [INFO] adding bridge: br-isolated\\n[2018/06/22 04:50:41 AM] [INFO] adding interface: eth1\\n[2018/06/22 04:50:41 AM] [INFO] adding vlan: vlan30\\n[2018/06/22 04:50:41 AM] [INFO] adding vlan: vlan40\\n[2018/06/22 04:50:41 AM] [INFO] applying network configs...\\n[2018/06/22 04:50:41 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/22 04:50:41 AM] [INFO] running ifdown on interface: vlan40\\n[2018/06/22 04:50:41 AM] [INFO] running ifdown on interface: eth1\\n[2018/06/22 04:50:41 AM] [INFO] running 
ifdown on interface: eth0\\n[2018/06/22 04:50:41 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/22 04:50:41 AM] [INFO] running ifdown on interface: vlan40\\n[2018/06/22 04:50:41 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/06/22 04:50:41 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/06/22 04:50:42 AM] [INFO] running ifup on interface: eth1\\n[2018/06/22 04:50:42 AM] [INFO] running ifup on interface: eth0\\n[2018/06/22 04:50:46 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/22 04:50:50 AM] [INFO] running ifup on interface: vlan40\\n[2018/06/22 
04:50:54 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/22 04:50:54 AM] [INFO] running ifup on interface: vlan40\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.3\\n++ '[' -n 192.168.24.3 ']'\\n++ break\\n++ echo 192.168.24.3\\n+ local METADATA_IP=192.168.24.3\\n+ '[' -n 192.168.24.3 ']'\\n+ is_local_ip 192.168.24.3\\n+ local IP_TO_CHECK=192.168.24.3\\n+ ip -o a\\n+ grep 'inet6\\\\? 
192.168.24.3/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\\n+ _ping=ping\\n+ [[ 192.168.24.3 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.3\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}", > "[2018-06-22 04:50:55,449] (heat-config) [DEBUG] [2018-06-22 04:50:40,946] (heat-config) [INFO] interface_name=nic1", > "[2018-06-22 04:50:40,946] (heat-config) [INFO] bridge_name=br-ex", > "[2018-06-22 04:50:40,946] (heat-config) [INFO] deploy_server_id=33738b22-53b0-409c-8c2a-3518ad03958c", > "[2018-06-22 04:50:40,946] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-22 04:50:40,946] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-ouduf67iilmp-0-wtivtksddghi-NetworkDeployment-ykpenzpavy5p-TripleOSoftwareDeployment-66zvzfrurnvm/b37f3911-f903-458e-8bb1-ff654a892731", > "[2018-06-22 04:50:40,946] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-22 04:50:40,946] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-22 04:50:40,947] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/67d509c0-67a8-4aaf-af1b-e77221b1413e", > "[2018-06-22 04:50:55,445] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.3...SUCCESS", > "", > "[2018-06-22 04:50:55,445] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.13/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", 
\"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.17/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}' ']'", > "+ '[' -z '' ']'", > "+ trap configure_safe_defaults EXIT", > "+ mkdir -p /etc/os-net-config", > "+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.13/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.17/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}'", > "++ type -t network_config_hook", > "+ '[' '' = function ']'", > "+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json", > "+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json", > "+ set +e", > "+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes", > "[2018/06/22 04:50:41 AM] [INFO] Using config file at: /etc/os-net-config/config.json", > "[2018/06/22 04:50:41 AM] [INFO] Ifcfg net config provider created.", > "[2018/06/22 04:50:41 AM] [INFO] Not using any mapping file.", > "[2018/06/22 04:50:41 AM] [INFO] Finding active nics", > "[2018/06/22 04:50:41 AM] [INFO] eth2 is an embedded active nic", > "[2018/06/22 04:50:41 AM] [INFO] eth1 is an embedded active nic", > "[2018/06/22 04:50:41 AM] [INFO] eth0 is an embedded active nic", > "[2018/06/22 04:50:41 AM] [INFO] 
lo is not an active nic", > "[2018/06/22 04:50:41 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)", > "[2018/06/22 04:50:41 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']", > "[2018/06/22 04:50:41 AM] [INFO] nic3 mapped to: eth2", > "[2018/06/22 04:50:41 AM] [INFO] nic2 mapped to: eth1", > "[2018/06/22 04:50:41 AM] [INFO] nic1 mapped to: eth0", > "[2018/06/22 04:50:41 AM] [INFO] adding interface: eth0", > "[2018/06/22 04:50:41 AM] [INFO] adding custom route for interface: eth0", > "[2018/06/22 04:50:41 AM] [INFO] adding bridge: br-isolated", > "[2018/06/22 04:50:41 AM] [INFO] adding interface: eth1", > "[2018/06/22 04:50:41 AM] [INFO] adding vlan: vlan30", > "[2018/06/22 04:50:41 AM] [INFO] adding vlan: vlan40", > "[2018/06/22 04:50:41 AM] [INFO] applying network configs...", > "[2018/06/22 04:50:41 AM] [INFO] running ifdown on interface: vlan30", > "[2018/06/22 04:50:41 AM] [INFO] running ifdown on interface: vlan40", > "[2018/06/22 04:50:41 AM] [INFO] running ifdown on interface: eth1", > "[2018/06/22 04:50:41 AM] [INFO] running ifdown on interface: eth0", > "[2018/06/22 04:50:41 AM] [INFO] running ifdown on interface: vlan30", > "[2018/06/22 04:50:41 AM] [INFO] running ifdown on interface: vlan40", > "[2018/06/22 04:50:41 AM] [INFO] running ifdown on bridge: br-isolated", > "[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated", > "[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40", > "[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated", > "[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30", > "[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0", > "[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1", > "[2018/06/22 04:50:41 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/ifcfg-br-isolated", > "[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30", > "[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1", > "[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0", > "[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40", > "[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40", > "[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30", > "[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0", > "[2018/06/22 04:50:41 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1", > "[2018/06/22 04:50:41 AM] [INFO] running ifup on bridge: br-isolated", > "[2018/06/22 04:50:42 AM] [INFO] running ifup on interface: eth1", > "[2018/06/22 04:50:42 AM] [INFO] running ifup on interface: eth0", > "[2018/06/22 04:50:46 AM] [INFO] running ifup on interface: vlan30", > "[2018/06/22 04:50:50 AM] [INFO] running ifup on interface: vlan40", > "[2018/06/22 04:50:54 AM] [INFO] running ifup on interface: vlan30", > "[2018/06/22 04:50:54 AM] [INFO] running ifup on interface: vlan40", > "+ RETVAL=2", > "+ set -e", > "+ [[ 2 == 2 ]]", > "+ ping_metadata_ip", > "++ get_metadata_ip", > "++ local METADATA_IP", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=", > "++ '[' -n '' ']'", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key 
os-collect-config.heat.auth_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=", > "++ '[' -n '' ']'", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=192.168.24.3", > "++ '[' -n 192.168.24.3 ']'", > "++ break", > "++ echo 192.168.24.3", > "+ local METADATA_IP=192.168.24.3", > "+ '[' -n 192.168.24.3 ']'", > "+ is_local_ip 192.168.24.3", > "+ local IP_TO_CHECK=192.168.24.3", > "+ ip -o a", > "+ grep 'inet6\\? 192.168.24.3/'", > "+ return 1", > "+ echo -n 'Trying to ping metadata IP 192.168.24.3...'", > "+ _ping=ping", > "+ [[ 192.168.24.3 =~ : ]]", > "+ local COUNT=0", > "+ ping -c 1 192.168.24.3", > "+ echo SUCCESS", > "+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'", > "+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules", > "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'", > "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'", > "+ configure_safe_defaults", > "+ [[ 0 == 0 ]]", > "+ return 0", > "", > "[2018-06-22 04:50:55,445] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/67d509c0-67a8-4aaf-af1b-e77221b1413e", > "", > "[2018-06-22 04:50:55,449] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-22 04:50:55,449] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/67d509c0-67a8-4aaf-af1b-e77221b1413e.json < /var/lib/heat-config/deployed/67d509c0-67a8-4aaf-af1b-e77221b1413e.notify.json", > "[2018-06-22 04:50:55,828] (heat-config) [INFO] ", > "[2018-06-22 04:50:55,828] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-22 
04:50:55,947 p=11115 u=mistral | TASK [Check-mode for Run deployment NetworkDeployment] ************************* >2018-06-22 04:50:55,964 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:50:55,982 p=11115 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-22 04:50:56,034 p=11115 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "577fe0b1-e7c6-4e2b-8578-fd665f846d7f"}, "changed": false} >2018-06-22 04:50:56,053 p=11115 u=mistral | TASK [Render deployment file for CephStorageUpgradeInitDeployment] ************* >2018-06-22 04:50:56,611 p=11115 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "2e582329b2f2d94d74a16c810a55dec6bfebeca2", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageUpgradeInitDeployment-577fe0b1-e7c6-4e2b-8578-fd665f846d7f", "gid": 0, "group": "root", "md5sum": "fadb145ed682f7d86509b12c15646779", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1186, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657456.1-233172840348211/source", "state": "file", "uid": 0} >2018-06-22 04:50:56,630 p=11115 u=mistral | TASK [Check if deployed file exists for CephStorageUpgradeInitDeployment] ****** >2018-06-22 04:50:56,925 p=11115 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 04:50:56,946 p=11115 u=mistral | TASK [Check previous deployment rc for CephStorageUpgradeInitDeployment] ******* >2018-06-22 04:50:56,964 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:50:56,982 p=11115 u=mistral | TASK [Remove deployed file for CephStorageUpgradeInitDeployment when previous deployment failed] *** >2018-06-22 04:50:57,001 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 
04:50:57,020 p=11115 u=mistral | TASK [Force remove deployed file for CephStorageUpgradeInitDeployment] ********* >2018-06-22 04:50:57,037 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:50:57,055 p=11115 u=mistral | TASK [Run deployment CephStorageUpgradeInitDeployment] ************************* >2018-06-22 04:50:57,825 p=11115 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/577fe0b1-e7c6-4e2b-8578-fd665f846d7f.notify.json)", "delta": "0:00:00.459473", "end": "2018-06-22 04:50:57.825134", "rc": 0, "start": "2018-06-22 04:50:57.365661", "stderr": "[2018-06-22 04:50:57,388] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/577fe0b1-e7c6-4e2b-8578-fd665f846d7f.json\n[2018-06-22 04:50:57,414] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-22 04:50:57,414] (heat-config) [DEBUG] [2018-06-22 04:50:57,408] (heat-config) [INFO] deploy_server_id=33738b22-53b0-409c-8c2a-3518ad03958c\n[2018-06-22 04:50:57,408] (heat-config) [INFO] deploy_action=CREATE\n[2018-06-22 04:50:57,408] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-ouduf67iilmp-0-wtivtksddghi-CephStorageUpgradeInitDeployment-dsnh2ji4ebfm/6d529383-54a5-4b14-bde9-bba248fb608b\n[2018-06-22 04:50:57,409] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-22 04:50:57,409] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-22 04:50:57,409] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/577fe0b1-e7c6-4e2b-8578-fd665f846d7f\n[2018-06-22 04:50:57,411] (heat-config) [INFO] \n[2018-06-22 04:50:57,411] (heat-config) [DEBUG] \n[2018-06-22 04:50:57,411] (heat-config) [INFO] Completed 
/var/lib/heat-config/heat-config-script/577fe0b1-e7c6-4e2b-8578-fd665f846d7f\n\n[2018-06-22 04:50:57,414] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-22 04:50:57,414] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/577fe0b1-e7c6-4e2b-8578-fd665f846d7f.json < /var/lib/heat-config/deployed/577fe0b1-e7c6-4e2b-8578-fd665f846d7f.notify.json\n[2018-06-22 04:50:57,820] (heat-config) [INFO] \n[2018-06-22 04:50:57,820] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-22 04:50:57,388] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/577fe0b1-e7c6-4e2b-8578-fd665f846d7f.json", "[2018-06-22 04:50:57,414] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-22 04:50:57,414] (heat-config) [DEBUG] [2018-06-22 04:50:57,408] (heat-config) [INFO] deploy_server_id=33738b22-53b0-409c-8c2a-3518ad03958c", "[2018-06-22 04:50:57,408] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-22 04:50:57,408] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-ouduf67iilmp-0-wtivtksddghi-CephStorageUpgradeInitDeployment-dsnh2ji4ebfm/6d529383-54a5-4b14-bde9-bba248fb608b", "[2018-06-22 04:50:57,409] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-22 04:50:57,409] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-22 04:50:57,409] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/577fe0b1-e7c6-4e2b-8578-fd665f846d7f", "[2018-06-22 04:50:57,411] (heat-config) [INFO] ", "[2018-06-22 04:50:57,411] (heat-config) [DEBUG] ", "[2018-06-22 04:50:57,411] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/577fe0b1-e7c6-4e2b-8578-fd665f846d7f", "", "[2018-06-22 04:50:57,414] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-22 04:50:57,414] (heat-config) [DEBUG] Running heat-config-notify 
/var/lib/heat-config/deployed/577fe0b1-e7c6-4e2b-8578-fd665f846d7f.json < /var/lib/heat-config/deployed/577fe0b1-e7c6-4e2b-8578-fd665f846d7f.notify.json", "[2018-06-22 04:50:57,820] (heat-config) [INFO] ", "[2018-06-22 04:50:57,820] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-22 04:50:57,845 p=11115 u=mistral | TASK [Output for CephStorageUpgradeInitDeployment] ***************************** >2018-06-22 04:50:57,894 p=11115 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-22 04:50:57,388] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/577fe0b1-e7c6-4e2b-8578-fd665f846d7f.json", > "[2018-06-22 04:50:57,414] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-22 04:50:57,414] (heat-config) [DEBUG] [2018-06-22 04:50:57,408] (heat-config) [INFO] deploy_server_id=33738b22-53b0-409c-8c2a-3518ad03958c", > "[2018-06-22 04:50:57,408] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-22 04:50:57,408] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-ouduf67iilmp-0-wtivtksddghi-CephStorageUpgradeInitDeployment-dsnh2ji4ebfm/6d529383-54a5-4b14-bde9-bba248fb608b", > "[2018-06-22 04:50:57,409] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-22 04:50:57,409] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-22 04:50:57,409] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/577fe0b1-e7c6-4e2b-8578-fd665f846d7f", > "[2018-06-22 04:50:57,411] (heat-config) [INFO] ", > "[2018-06-22 04:50:57,411] (heat-config) [DEBUG] ", > "[2018-06-22 04:50:57,411] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/577fe0b1-e7c6-4e2b-8578-fd665f846d7f", > "", > "[2018-06-22 04:50:57,414] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-22 04:50:57,414] (heat-config) [DEBUG] 
Running heat-config-notify /var/lib/heat-config/deployed/577fe0b1-e7c6-4e2b-8578-fd665f846d7f.json < /var/lib/heat-config/deployed/577fe0b1-e7c6-4e2b-8578-fd665f846d7f.notify.json", > "[2018-06-22 04:50:57,820] (heat-config) [INFO] ", > "[2018-06-22 04:50:57,820] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-22 04:50:57,915 p=11115 u=mistral | TASK [Check-mode for Run deployment CephStorageUpgradeInitDeployment] ********** >2018-06-22 04:50:57,928 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:50:57,946 p=11115 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-22 04:50:58,033 p=11115 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "eb0153a4-f044-44bb-ae51-9bb1dd0bab06"}, "changed": false} >2018-06-22 04:50:58,054 p=11115 u=mistral | TASK [Render deployment file for CephStorageDeployment] ************************ >2018-06-22 04:50:58,671 p=11115 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "167c37a443d95ec87f29a8f9e5482d5a3f065af9", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageDeployment-eb0153a4-f044-44bb-ae51-9bb1dd0bab06", "gid": 0, "group": "root", "md5sum": "f9d8a164e4e36bb075d69200138cc0cb", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 9062, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657458.15-154379682738943/source", "state": "file", "uid": 0} >2018-06-22 04:50:58,690 p=11115 u=mistral | TASK [Check if deployed file exists for CephStorageDeployment] ***************** >2018-06-22 04:50:59,002 p=11115 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 04:50:59,021 p=11115 u=mistral | TASK [Check previous deployment rc for CephStorageDeployment] ****************** >2018-06-22 04:50:59,038 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, 
"skip_reason": "Conditional result was False"} >2018-06-22 04:50:59,056 p=11115 u=mistral | TASK [Remove deployed file for CephStorageDeployment when previous deployment failed] *** >2018-06-22 04:50:59,072 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:50:59,090 p=11115 u=mistral | TASK [Force remove deployed file for CephStorageDeployment] ******************** >2018-06-22 04:50:59,107 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:50:59,125 p=11115 u=mistral | TASK [Run deployment CephStorageDeployment] ************************************ >2018-06-22 04:50:59,980 p=11115 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/eb0153a4-f044-44bb-ae51-9bb1dd0bab06.notify.json)", "delta": "0:00:00.547162", "end": "2018-06-22 04:50:59.982186", "rc": 0, "start": "2018-06-22 04:50:59.435024", "stderr": "[2018-06-22 04:50:59,459] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/eb0153a4-f044-44bb-ae51-9bb1dd0bab06.json\n[2018-06-22 04:50:59,574] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-22 04:50:59,574] (heat-config) [DEBUG] \n[2018-06-22 04:50:59,574] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-06-22 04:50:59,575] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/eb0153a4-f044-44bb-ae51-9bb1dd0bab06.json < /var/lib/heat-config/deployed/eb0153a4-f044-44bb-ae51-9bb1dd0bab06.notify.json\n[2018-06-22 04:50:59,975] (heat-config) [INFO] \n[2018-06-22 04:50:59,975] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-22 04:50:59,459] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < 
/var/lib/heat-config/deployed/eb0153a4-f044-44bb-ae51-9bb1dd0bab06.json", "[2018-06-22 04:50:59,574] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-22 04:50:59,574] (heat-config) [DEBUG] ", "[2018-06-22 04:50:59,574] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", "[2018-06-22 04:50:59,575] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/eb0153a4-f044-44bb-ae51-9bb1dd0bab06.json < /var/lib/heat-config/deployed/eb0153a4-f044-44bb-ae51-9bb1dd0bab06.notify.json", "[2018-06-22 04:50:59,975] (heat-config) [INFO] ", "[2018-06-22 04:50:59,975] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-22 04:50:59,999 p=11115 u=mistral | TASK [Output for CephStorageDeployment] **************************************** >2018-06-22 04:51:00,046 p=11115 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-22 04:50:59,459] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/eb0153a4-f044-44bb-ae51-9bb1dd0bab06.json", > "[2018-06-22 04:50:59,574] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-22 04:50:59,574] (heat-config) [DEBUG] ", > "[2018-06-22 04:50:59,574] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", > "[2018-06-22 04:50:59,575] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/eb0153a4-f044-44bb-ae51-9bb1dd0bab06.json < /var/lib/heat-config/deployed/eb0153a4-f044-44bb-ae51-9bb1dd0bab06.notify.json", > "[2018-06-22 04:50:59,975] (heat-config) [INFO] ", > "[2018-06-22 04:50:59,975] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-22 04:51:00,065 p=11115 u=mistral | TASK [Check-mode for Run deployment CephStorageDeployment] ********************* >2018-06-22 04:51:00,079 p=11115 u=mistral | skipping: [ceph-0] 
=> {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:00,096 p=11115 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-22 04:51:00,147 p=11115 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "37344553-c624-4d62-bcae-63abec957017"}, "changed": false} >2018-06-22 04:51:00,167 p=11115 u=mistral | TASK [Render deployment file for CephStorageHostsDeployment] ******************* >2018-06-22 04:51:00,719 p=11115 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "a4b79456732fba60d0181011c218be8804dd061d", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageHostsDeployment-37344553-c624-4d62-bcae-63abec957017", "gid": 0, "group": "root", "md5sum": "88d44c32b9c4e763f4307308eec0f34e", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4088, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657460.22-176834081950034/source", "state": "file", "uid": 0} >2018-06-22 04:51:00,740 p=11115 u=mistral | TASK [Check if deployed file exists for CephStorageHostsDeployment] ************ >2018-06-22 04:51:01,053 p=11115 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 04:51:01,072 p=11115 u=mistral | TASK [Check previous deployment rc for CephStorageHostsDeployment] ************* >2018-06-22 04:51:01,089 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:01,108 p=11115 u=mistral | TASK [Remove deployed file for CephStorageHostsDeployment when previous deployment failed] *** >2018-06-22 04:51:01,124 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:01,145 p=11115 u=mistral | TASK [Force remove deployed file for CephStorageHostsDeployment] *************** >2018-06-22 04:51:01,161 p=11115 u=mistral | skipping: [ceph-0] => {"changed": 
false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:01,180 p=11115 u=mistral | TASK [Run deployment CephStorageHostsDeployment] ******************************* >2018-06-22 04:51:01,960 p=11115 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/37344553-c624-4d62-bcae-63abec957017.notify.json)", "delta": "0:00:00.440695", "end": "2018-06-22 04:51:01.933521", "rc": 0, "start": "2018-06-22 04:51:01.492826", "stderr": "[2018-06-22 04:51:01,514] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/37344553-c624-4d62-bcae-63abec957017.json\n[2018-06-22 04:51:01,546] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' -z '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain 
compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 
compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain 
ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain 
ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain 
ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 
ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 
ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 
ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.7 
overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 
overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 
0}\n[2018-06-22 04:51:01,546] (heat-config) [DEBUG] [2018-06-22 04:51:01,532] (heat-config) [INFO] hosts=192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.11 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.10 controller-0.localdomain controller-0\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.111 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.14 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.16 compute-0.external.localdomain compute-0.external\n192.168.24.16 compute-0.management.localdomain compute-0.management\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.17 ceph-0.localdomain ceph-0\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane\n[2018-06-22 04:51:01,533] (heat-config) [INFO] deploy_server_id=33738b22-53b0-409c-8c2a-3518ad03958c\n[2018-06-22 04:51:01,533] (heat-config) [INFO] 
deploy_action=CREATE\n[2018-06-22 04:51:01,533] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorageHostsDeployment-thvwltflpoj7-0-sd2hczlz6yx4/6e26ca62-783a-4102-8d11-adbc2e088ddc\n[2018-06-22 04:51:01,533] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-22 04:51:01,533] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-22 04:51:01,533] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/37344553-c624-4d62-bcae-63abec957017\n[2018-06-22 04:51:01,543] (heat-config) [INFO] \n[2018-06-22 04:51:01,543] (heat-config) [DEBUG] + set -o pipefail\n+ '[' '!' -z '192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.11 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.10 controller-0.localdomain controller-0\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.111 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.14 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.16 compute-0.external.localdomain compute-0.external\n192.168.24.16 compute-0.management.localdomain compute-0.management\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.17 ceph-0.localdomain ceph-0\n172.17.3.17 ceph-0.storage.localdomain 
ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.11 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.10 controller-0.localdomain controller-0\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.111 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.14 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.16 compute-0.external.localdomain compute-0.external\n192.168.24.16 compute-0.management.localdomain compute-0.management\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.17 ceph-0.localdomain ceph-0\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.13 ceph-0.internalapi.localdomain 
ceph-0.internalapi\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.11 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.10 controller-0.localdomain controller-0\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.111 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.14 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.16 compute-0.external.localdomain compute-0.external\n192.168.24.16 compute-0.management.localdomain compute-0.management\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.17 ceph-0.localdomain ceph-0\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\n192.168.24.13 ceph-0.management.localdomain 
ceph-0.management\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.11 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.10 controller-0.localdomain controller-0\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.111 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.14 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.16 compute-0.external.localdomain compute-0.external\n192.168.24.16 compute-0.management.localdomain compute-0.management\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.17 ceph-0.localdomain ceph-0\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\n192.168.24.13 
ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.11 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.10 controller-0.localdomain controller-0\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.111 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.14 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.16 compute-0.external.localdomain compute-0.external\n192.168.24.16 compute-0.management.localdomain compute-0.management\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.17 ceph-0.localdomain ceph-0\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\n+ local 
'entries=192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.11 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.10 controller-0.localdomain controller-0\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.111 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.14 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.16 compute-0.external.localdomain compute-0.external\n192.168.24.16 compute-0.management.localdomain compute-0.management\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.17 ceph-0.localdomain ceph-0\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.11 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.10 controller-0.localdomain controller-0\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.111 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.14 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.16 compute-0.external.localdomain compute-0.external\n192.168.24.16 compute-0.management.localdomain compute-0.management\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.17 ceph-0.localdomain ceph-0\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in 
'/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.11 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.10 controller-0.localdomain controller-0\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.111 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.14 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.16 compute-0.external.localdomain compute-0.external\n192.168.24.16 compute-0.management.localdomain compute-0.management\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.17 ceph-0.localdomain ceph-0\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.16 
overcloud.storagemgmt.localdomain\n172.17.1.11 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.10 controller-0.localdomain controller-0\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.111 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.14 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.16 compute-0.external.localdomain compute-0.external\n192.168.24.16 compute-0.management.localdomain compute-0.management\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.17 ceph-0.localdomain ceph-0\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.11 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.10 controller-0.localdomain controller-0\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.111 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.14 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.16 compute-0.external.localdomain compute-0.external\n192.168.24.16 compute-0.management.localdomain compute-0.management\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.17 ceph-0.localdomain ceph-0\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in 
'/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.11 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.10 controller-0.localdomain controller-0\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.111 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.14 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.16 compute-0.external.localdomain compute-0.external\n192.168.24.16 compute-0.management.localdomain compute-0.management\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.17 ceph-0.localdomain ceph-0\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.16 
overcloud.storagemgmt.localdomain\n172.17.1.11 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.10 controller-0.localdomain controller-0\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.111 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.14 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.16 compute-0.external.localdomain compute-0.external\n192.168.24.16 compute-0.management.localdomain compute-0.management\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.17 ceph-0.localdomain ceph-0\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.11 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.10 controller-0.localdomain controller-0\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.111 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.14 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.16 compute-0.external.localdomain compute-0.external\n192.168.24.16 compute-0.management.localdomain compute-0.management\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.17 ceph-0.localdomain ceph-0\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ write_entries 
/etc/hosts '192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.11 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.10 controller-0.localdomain controller-0\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.111 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.14 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.16 compute-0.external.localdomain compute-0.external\n192.168.24.16 compute-0.management.localdomain compute-0.management\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.17 ceph-0.localdomain ceph-0\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/hosts\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.11 overcloud.internalapi.localdomain\n10.0.0.106 
overcloud.localdomain\n172.17.1.10 controller-0.localdomain controller-0\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.111 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.14 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.16 compute-0.external.localdomain compute-0.external\n192.168.24.16 compute-0.management.localdomain compute-0.management\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.17 ceph-0.localdomain ceph-0\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/hosts ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.16 overcloud.storage.localdomain\n172.17.4.16 overcloud.storagemgmt.localdomain\n172.17.1.11 overcloud.internalapi.localdomain\n10.0.0.106 overcloud.localdomain\n172.17.1.10 controller-0.localdomain controller-0\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.111 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.14 compute-0.localdomain compute-0\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.16 compute-0.external.localdomain compute-0.external\n192.168.24.16 compute-0.management.localdomain compute-0.management\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.17 ceph-0.localdomain ceph-0\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n\n[2018-06-22 04:51:01,543] (heat-config) [INFO] Completed 
/var/lib/heat-config/heat-config-script/37344553-c624-4d62-bcae-63abec957017\n\n[2018-06-22 04:51:01,546] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-22 04:51:01,547] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/37344553-c624-4d62-bcae-63abec957017.json < /var/lib/heat-config/deployed/37344553-c624-4d62-bcae-63abec957017.notify.json\n[2018-06-22 04:51:01,928] (heat-config) [INFO] \n[2018-06-22 04:51:01,928] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-22 04:51:01,514] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/37344553-c624-4d62-bcae-63abec957017.json", "[2018-06-22 04:51:01,546] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' -z '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 
compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain 
compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 
ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain 
ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain 
ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.7 
overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 
overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 
0}", "[2018-06-22 04:51:01,546] (heat-config) [DEBUG] [2018-06-22 04:51:01,532] (heat-config) [INFO] hosts=192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.11 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.10 controller-0.localdomain controller-0", "172.17.3.11 controller-0.storage.localdomain controller-0.storage", "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.111 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.14 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.16 compute-0.external.localdomain compute-0.external", "192.168.24.16 compute-0.management.localdomain compute-0.management", "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.17 ceph-0.localdomain ceph-0", "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.13 ceph-0.external.localdomain ceph-0.external", "192.168.24.13 ceph-0.management.localdomain ceph-0.management", "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane", "[2018-06-22 04:51:01,533] (heat-config) [INFO] 
deploy_server_id=33738b22-53b0-409c-8c2a-3518ad03958c", "[2018-06-22 04:51:01,533] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-22 04:51:01,533] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorageHostsDeployment-thvwltflpoj7-0-sd2hczlz6yx4/6e26ca62-783a-4102-8d11-adbc2e088ddc", "[2018-06-22 04:51:01,533] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-22 04:51:01,533] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-22 04:51:01,533] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/37344553-c624-4d62-bcae-63abec957017", "[2018-06-22 04:51:01,543] (heat-config) [INFO] ", "[2018-06-22 04:51:01,543] (heat-config) [DEBUG] + set -o pipefail", "+ '[' '!' -z '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.11 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.10 controller-0.localdomain controller-0", "172.17.3.11 controller-0.storage.localdomain controller-0.storage", "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.111 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.14 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.16 compute-0.external.localdomain compute-0.external", "192.168.24.16 compute-0.management.localdomain compute-0.management", 
"192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.17 ceph-0.localdomain ceph-0", "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.13 ceph-0.external.localdomain ceph-0.external", "192.168.24.13 ceph-0.management.localdomain ceph-0.management", "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.11 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.10 controller-0.localdomain controller-0", "172.17.3.11 controller-0.storage.localdomain controller-0.storage", "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.111 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.14 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.16 compute-0.external.localdomain compute-0.external", "192.168.24.16 compute-0.management.localdomain compute-0.management", "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", 
"172.17.3.17 ceph-0.localdomain ceph-0", "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.13 ceph-0.external.localdomain ceph-0.external", "192.168.24.13 ceph-0.management.localdomain ceph-0.management", "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.debian.tmpl", "+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.11 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.10 controller-0.localdomain controller-0", "172.17.3.11 controller-0.storage.localdomain controller-0.storage", "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.111 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.14 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.16 compute-0.external.localdomain compute-0.external", "192.168.24.16 compute-0.management.localdomain compute-0.management", "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.17 ceph-0.localdomain ceph-0", "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.13 ceph-0.external.localdomain ceph-0.external", "192.168.24.13 ceph-0.management.localdomain ceph-0.management", "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.11 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.10 controller-0.localdomain controller-0", "172.17.3.11 controller-0.storage.localdomain controller-0.storage", "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.111 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.14 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.16 compute-0.external.localdomain compute-0.external", "192.168.24.16 compute-0.management.localdomain compute-0.management", "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.17 ceph-0.localdomain ceph-0", "172.17.3.17 ceph-0.storage.localdomain 
ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.13 ceph-0.external.localdomain ceph-0.external", "192.168.24.13 ceph-0.management.localdomain ceph-0.management", "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.11 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.10 controller-0.localdomain controller-0", "172.17.3.11 controller-0.storage.localdomain controller-0.storage", "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.111 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.14 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.16 compute-0.external.localdomain compute-0.external", "192.168.24.16 compute-0.management.localdomain compute-0.management", "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.17 ceph-0.localdomain ceph-0", "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.13 ceph-0.external.localdomain ceph-0.external", "192.168.24.13 ceph-0.management.localdomain ceph-0.management", "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl", "+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.11 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.10 controller-0.localdomain controller-0", "172.17.3.11 controller-0.storage.localdomain controller-0.storage", "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.111 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.14 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.16 compute-0.external.localdomain compute-0.external", "192.168.24.16 compute-0.management.localdomain compute-0.management", "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.17 ceph-0.localdomain ceph-0", "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", 
"192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.13 ceph-0.external.localdomain ceph-0.external", "192.168.24.13 ceph-0.management.localdomain ceph-0.management", "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.11 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.10 controller-0.localdomain controller-0", "172.17.3.11 controller-0.storage.localdomain controller-0.storage", "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.111 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.14 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.16 compute-0.external.localdomain compute-0.external", "192.168.24.16 compute-0.management.localdomain compute-0.management", "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.17 ceph-0.localdomain ceph-0", "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.13 
ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.13 ceph-0.external.localdomain ceph-0.external", "192.168.24.13 ceph-0.management.localdomain ceph-0.management", "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.11 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.10 controller-0.localdomain controller-0", "172.17.3.11 controller-0.storage.localdomain controller-0.storage", "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.111 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.14 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.16 compute-0.external.localdomain compute-0.external", "192.168.24.16 compute-0.management.localdomain compute-0.management", "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.17 ceph-0.localdomain ceph-0", "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", 
"192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.13 ceph-0.external.localdomain ceph-0.external", "192.168.24.13 ceph-0.management.localdomain ceph-0.management", "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.redhat.tmpl", "+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.11 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.10 controller-0.localdomain controller-0", "172.17.3.11 controller-0.storage.localdomain controller-0.storage", "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.111 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.14 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.16 compute-0.external.localdomain compute-0.external", "192.168.24.16 compute-0.management.localdomain compute-0.management", "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.17 ceph-0.localdomain ceph-0", "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.13 ceph-0.external.localdomain ceph-0.external", 
"192.168.24.13 ceph-0.management.localdomain ceph-0.management", "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.11 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.10 controller-0.localdomain controller-0", "172.17.3.11 controller-0.storage.localdomain controller-0.storage", "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.111 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.14 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.16 compute-0.external.localdomain compute-0.external", "192.168.24.16 compute-0.management.localdomain compute-0.management", "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.17 ceph-0.localdomain ceph-0", "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.13 
ceph-0.external.localdomain ceph-0.external", "192.168.24.13 ceph-0.management.localdomain ceph-0.management", "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.11 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.10 controller-0.localdomain controller-0", "172.17.3.11 controller-0.storage.localdomain controller-0.storage", "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.111 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.14 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.16 compute-0.external.localdomain compute-0.external", "192.168.24.16 compute-0.management.localdomain compute-0.management", "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.17 ceph-0.localdomain ceph-0", "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.13 ceph-0.external.localdomain ceph-0.external", 
"192.168.24.13 ceph-0.management.localdomain ceph-0.management", "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.suse.tmpl", "+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.11 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.10 controller-0.localdomain controller-0", "172.17.3.11 controller-0.storage.localdomain controller-0.storage", "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.111 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.14 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.16 compute-0.external.localdomain compute-0.external", "192.168.24.16 compute-0.management.localdomain compute-0.management", "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.17 ceph-0.localdomain ceph-0", "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.13 ceph-0.external.localdomain ceph-0.external", "192.168.24.13 ceph-0.management.localdomain ceph-0.management", "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ 
'[' '!' -f /etc/cloud/templates/hosts.suse.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.11 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.10 controller-0.localdomain controller-0", "172.17.3.11 controller-0.storage.localdomain controller-0.storage", "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.111 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.14 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.16 compute-0.external.localdomain compute-0.external", "192.168.24.16 compute-0.management.localdomain compute-0.management", "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.17 ceph-0.localdomain ceph-0", "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.13 ceph-0.external.localdomain ceph-0.external", "192.168.24.13 ceph-0.management.localdomain ceph-0.management", "192.168.24.13 ceph-0.ctlplane.localdomain 
ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ write_entries /etc/hosts '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.11 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.10 controller-0.localdomain controller-0", "172.17.3.11 controller-0.storage.localdomain controller-0.storage", "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.111 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.14 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.16 compute-0.external.localdomain compute-0.external", "192.168.24.16 compute-0.management.localdomain compute-0.management", "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.17 ceph-0.localdomain ceph-0", "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.13 ceph-0.external.localdomain ceph-0.external", "192.168.24.13 ceph-0.management.localdomain ceph-0.management", "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/hosts", "+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.16 
overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.11 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.10 controller-0.localdomain controller-0", "172.17.3.11 controller-0.storage.localdomain controller-0.storage", "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.111 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.14 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.16 compute-0.external.localdomain compute-0.external", "192.168.24.16 compute-0.management.localdomain compute-0.management", "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.17 ceph-0.localdomain ceph-0", "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.13 ceph-0.external.localdomain ceph-0.external", "192.168.24.13 ceph-0.management.localdomain ceph-0.management", "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' 
-f /etc/hosts ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.16 overcloud.storage.localdomain", "172.17.4.16 overcloud.storagemgmt.localdomain", "172.17.1.11 overcloud.internalapi.localdomain", "10.0.0.106 overcloud.localdomain", "172.17.1.10 controller-0.localdomain controller-0", "172.17.3.11 controller-0.storage.localdomain controller-0.storage", "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.111 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.14 compute-0.localdomain compute-0", "172.17.3.13 compute-0.storage.localdomain compute-0.storage", "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.16 compute-0.external.localdomain compute-0.external", "192.168.24.16 compute-0.management.localdomain compute-0.management", "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.17 ceph-0.localdomain ceph-0", "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.13 ceph-0.external.localdomain ceph-0.external", "192.168.24.13 ceph-0.management.localdomain ceph-0.management", "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", 
"", "[2018-06-22 04:51:01,543] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/37344553-c624-4d62-bcae-63abec957017", "", "[2018-06-22 04:51:01,546] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-22 04:51:01,547] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/37344553-c624-4d62-bcae-63abec957017.json < /var/lib/heat-config/deployed/37344553-c624-4d62-bcae-63abec957017.notify.json", "[2018-06-22 04:51:01,928] (heat-config) [INFO] ", "[2018-06-22 04:51:01,928] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-22 04:51:01,986 p=11115 u=mistral | TASK [Output for CephStorageHostsDeployment] *********************************** >2018-06-22 04:51:02,064 p=11115 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-22 04:51:01,514] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/37344553-c624-4d62-bcae-63abec957017.json", > "[2018-06-22 04:51:01,546] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' 
-z '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 
overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain 
controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.7 
overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.7 
overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.7 
overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 
overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.16 overcloud.storage.localdomain\\n172.17.4.16 overcloud.storagemgmt.localdomain\\n172.17.1.11 overcloud.internalapi.localdomain\\n10.0.0.106 overcloud.localdomain\\n172.17.1.10 controller-0.localdomain controller-0\\n172.17.3.11 controller-0.storage.localdomain controller-0.storage\\n172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.12 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.111 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.14 compute-0.localdomain compute-0\\n172.17.3.13 compute-0.storage.localdomain compute-0.storage\\n192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.15 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.16 compute-0.external.localdomain compute-0.external\\n192.168.24.16 compute-0.management.localdomain compute-0.management\\n192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.17 ceph-0.localdomain ceph-0\\n172.17.3.17 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.13 ceph-0.external.localdomain ceph-0.external\\n192.168.24.13 ceph-0.management.localdomain ceph-0.management\\n192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 
0}", > "[2018-06-22 04:51:01,546] (heat-config) [DEBUG] [2018-06-22 04:51:01,532] (heat-config) [INFO] hosts=192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.11 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.10 controller-0.localdomain controller-0", > "172.17.3.11 controller-0.storage.localdomain controller-0.storage", > "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.111 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.14 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.16 compute-0.external.localdomain compute-0.external", > "192.168.24.16 compute-0.management.localdomain compute-0.management", > "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.17 ceph-0.localdomain ceph-0", > "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.13 ceph-0.external.localdomain ceph-0.external", > "192.168.24.13 ceph-0.management.localdomain ceph-0.management", > "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane", > "[2018-06-22 04:51:01,533] 
(heat-config) [INFO] deploy_server_id=33738b22-53b0-409c-8c2a-3518ad03958c", > "[2018-06-22 04:51:01,533] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-22 04:51:01,533] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorageHostsDeployment-thvwltflpoj7-0-sd2hczlz6yx4/6e26ca62-783a-4102-8d11-adbc2e088ddc", > "[2018-06-22 04:51:01,533] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-22 04:51:01,533] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-22 04:51:01,533] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/37344553-c624-4d62-bcae-63abec957017", > "[2018-06-22 04:51:01,543] (heat-config) [INFO] ", > "[2018-06-22 04:51:01,543] (heat-config) [DEBUG] + set -o pipefail", > "+ '[' '!' -z '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.11 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.10 controller-0.localdomain controller-0", > "172.17.3.11 controller-0.storage.localdomain controller-0.storage", > "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.111 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.14 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.16 compute-0.external.localdomain compute-0.external", > 
"192.168.24.16 compute-0.management.localdomain compute-0.management", > "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.17 ceph-0.localdomain ceph-0", > "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.13 ceph-0.external.localdomain ceph-0.external", > "192.168.24.13 ceph-0.management.localdomain ceph-0.management", > "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.11 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.10 controller-0.localdomain controller-0", > "172.17.3.11 controller-0.storage.localdomain controller-0.storage", > "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.111 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.14 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.16 compute-0.external.localdomain compute-0.external", > "192.168.24.16 
compute-0.management.localdomain compute-0.management", > "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.17 ceph-0.localdomain ceph-0", > "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.13 ceph-0.external.localdomain ceph-0.external", > "192.168.24.13 ceph-0.management.localdomain ceph-0.management", > "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.debian.tmpl", > "+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.11 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.10 controller-0.localdomain controller-0", > "172.17.3.11 controller-0.storage.localdomain controller-0.storage", > "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.111 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.14 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.16 compute-0.external.localdomain compute-0.external", > "192.168.24.16 compute-0.management.localdomain compute-0.management", > 
"192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.17 ceph-0.localdomain ceph-0", > "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.13 ceph-0.external.localdomain ceph-0.external", > "192.168.24.13 ceph-0.management.localdomain ceph-0.management", > "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.11 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.10 controller-0.localdomain controller-0", > "172.17.3.11 controller-0.storage.localdomain controller-0.storage", > "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.111 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.14 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.16 
compute-0.external.localdomain compute-0.external", > "192.168.24.16 compute-0.management.localdomain compute-0.management", > "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.17 ceph-0.localdomain ceph-0", > "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.13 ceph-0.external.localdomain ceph-0.external", > "192.168.24.13 ceph-0.management.localdomain ceph-0.management", > "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.11 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.10 controller-0.localdomain controller-0", > "172.17.3.11 controller-0.storage.localdomain controller-0.storage", > "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.111 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.14 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain 
compute-0.tenant", > "192.168.24.16 compute-0.external.localdomain compute-0.external", > "192.168.24.16 compute-0.management.localdomain compute-0.management", > "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.17 ceph-0.localdomain ceph-0", > "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.13 ceph-0.external.localdomain ceph-0.external", > "192.168.24.13 ceph-0.management.localdomain ceph-0.management", > "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl", > "+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.11 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.10 controller-0.localdomain controller-0", > "172.17.3.11 controller-0.storage.localdomain controller-0.storage", > "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.111 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.14 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.16 
compute-0.external.localdomain compute-0.external", > "192.168.24.16 compute-0.management.localdomain compute-0.management", > "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.17 ceph-0.localdomain ceph-0", > "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.13 ceph-0.external.localdomain ceph-0.external", > "192.168.24.13 ceph-0.management.localdomain ceph-0.management", > "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.11 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.10 controller-0.localdomain controller-0", > "172.17.3.11 controller-0.storage.localdomain controller-0.storage", > "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.111 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.14 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.14 compute-0.internalapi.localdomain 
compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.16 compute-0.external.localdomain compute-0.external", > "192.168.24.16 compute-0.management.localdomain compute-0.management", > "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.17 ceph-0.localdomain ceph-0", > "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.13 ceph-0.external.localdomain ceph-0.external", > "192.168.24.13 ceph-0.management.localdomain ceph-0.management", > "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.11 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.10 controller-0.localdomain controller-0", > "172.17.3.11 controller-0.storage.localdomain controller-0.storage", > "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.111 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.14 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.14 
compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.16 compute-0.external.localdomain compute-0.external", > "192.168.24.16 compute-0.management.localdomain compute-0.management", > "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.17 ceph-0.localdomain ceph-0", > "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.13 ceph-0.external.localdomain ceph-0.external", > "192.168.24.13 ceph-0.management.localdomain ceph-0.management", > "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.redhat.tmpl", > "+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.11 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.10 controller-0.localdomain controller-0", > "172.17.3.11 controller-0.storage.localdomain controller-0.storage", > "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.111 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.14 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", > 
"172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.16 compute-0.external.localdomain compute-0.external", > "192.168.24.16 compute-0.management.localdomain compute-0.management", > "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.17 ceph-0.localdomain ceph-0", > "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.13 ceph-0.external.localdomain ceph-0.external", > "192.168.24.13 ceph-0.management.localdomain ceph-0.management", > "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.11 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.10 controller-0.localdomain controller-0", > "172.17.3.11 controller-0.storage.localdomain controller-0.storage", > "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.111 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.14 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.16 compute-0.storagemgmt.localdomain 
compute-0.storagemgmt", > "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.16 compute-0.external.localdomain compute-0.external", > "192.168.24.16 compute-0.management.localdomain compute-0.management", > "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.17 ceph-0.localdomain ceph-0", > "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.13 ceph-0.external.localdomain ceph-0.external", > "192.168.24.13 ceph-0.management.localdomain ceph-0.management", > "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.11 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.10 controller-0.localdomain controller-0", > "172.17.3.11 controller-0.storage.localdomain controller-0.storage", > "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.111 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.14 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.16 
compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.16 compute-0.external.localdomain compute-0.external", > "192.168.24.16 compute-0.management.localdomain compute-0.management", > "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.17 ceph-0.localdomain ceph-0", > "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.13 ceph-0.external.localdomain ceph-0.external", > "192.168.24.13 ceph-0.management.localdomain ceph-0.management", > "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.suse.tmpl", > "+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.11 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.10 controller-0.localdomain controller-0", > "172.17.3.11 controller-0.storage.localdomain controller-0.storage", > "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.111 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.14 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > 
"172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.16 compute-0.external.localdomain compute-0.external", > "192.168.24.16 compute-0.management.localdomain compute-0.management", > "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.17 ceph-0.localdomain ceph-0", > "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.13 ceph-0.external.localdomain ceph-0.external", > "192.168.24.13 ceph-0.management.localdomain ceph-0.management", > "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.suse.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.11 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.10 controller-0.localdomain controller-0", > "172.17.3.11 controller-0.storage.localdomain controller-0.storage", > "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.111 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.14 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain 
compute-0.storage", > "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.16 compute-0.external.localdomain compute-0.external", > "192.168.24.16 compute-0.management.localdomain compute-0.management", > "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.17 ceph-0.localdomain ceph-0", > "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.13 ceph-0.external.localdomain ceph-0.external", > "192.168.24.13 ceph-0.management.localdomain ceph-0.management", > "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ write_entries /etc/hosts '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.11 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.10 controller-0.localdomain controller-0", > "172.17.3.11 controller-0.storage.localdomain controller-0.storage", > "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.111 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.14 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.16 
compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.16 compute-0.external.localdomain compute-0.external", > "192.168.24.16 compute-0.management.localdomain compute-0.management", > "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.17 ceph-0.localdomain ceph-0", > "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.13 ceph-0.external.localdomain ceph-0.external", > "192.168.24.13 ceph-0.management.localdomain ceph-0.management", > "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/hosts", > "+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.11 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.10 controller-0.localdomain controller-0", > "172.17.3.11 controller-0.storage.localdomain controller-0.storage", > "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.111 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.14 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.16 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.14 
compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.16 compute-0.external.localdomain compute-0.external", > "192.168.24.16 compute-0.management.localdomain compute-0.management", > "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.17 ceph-0.localdomain ceph-0", > "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.13 ceph-0.external.localdomain ceph-0.external", > "192.168.24.13 ceph-0.management.localdomain ceph-0.management", > "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/hosts ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.16 overcloud.storage.localdomain", > "172.17.4.16 overcloud.storagemgmt.localdomain", > "172.17.1.11 overcloud.internalapi.localdomain", > "10.0.0.106 overcloud.localdomain", > "172.17.1.10 controller-0.localdomain controller-0", > "172.17.3.11 controller-0.storage.localdomain controller-0.storage", > "172.17.4.19 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.10 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.12 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.111 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.14 compute-0.localdomain compute-0", > "172.17.3.13 compute-0.storage.localdomain compute-0.storage", > "192.168.24.16 
compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.14 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.15 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.16 compute-0.external.localdomain compute-0.external", > "192.168.24.16 compute-0.management.localdomain compute-0.management", > "192.168.24.16 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.17 ceph-0.localdomain ceph-0", > "172.17.3.17 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.13 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.13 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.13 ceph-0.external.localdomain ceph-0.external", > "192.168.24.13 ceph-0.management.localdomain ceph-0.management", > "192.168.24.13 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "", > "[2018-06-22 04:51:01,543] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/37344553-c624-4d62-bcae-63abec957017", > "", > "[2018-06-22 04:51:01,546] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-22 04:51:01,547] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/37344553-c624-4d62-bcae-63abec957017.json < /var/lib/heat-config/deployed/37344553-c624-4d62-bcae-63abec957017.notify.json", > "[2018-06-22 04:51:01,928] (heat-config) [INFO] ", > "[2018-06-22 04:51:01,928] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-22 04:51:02,093 p=11115 u=mistral | TASK [Check-mode for Run deployment CephStorageHostsDeployment] **************** >2018-06-22 04:51:02,107 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:02,127 p=11115 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-22 04:51:02,328 
p=11115 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "d7af911b-6e11-436b-a229-2a9e5e1a9234"}, "changed": false} >2018-06-22 04:51:02,347 p=11115 u=mistral | TASK [Render deployment file for CephStorageAllNodesDeployment] **************** >2018-06-22 04:51:03,086 p=11115 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "ffc670ae84b256a2ea89626c082999012f9962bd", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageAllNodesDeployment-d7af911b-6e11-436b-a229-2a9e5e1a9234", "gid": 0, "group": "root", "md5sum": "637f2c0cb2741650120e11fbea597e95", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 19021, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657462.54-280949466942040/source", "state": "file", "uid": 0} >2018-06-22 04:51:03,104 p=11115 u=mistral | TASK [Check if deployed file exists for CephStorageAllNodesDeployment] ********* >2018-06-22 04:51:03,475 p=11115 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 04:51:03,495 p=11115 u=mistral | TASK [Check previous deployment rc for CephStorageAllNodesDeployment] ********** >2018-06-22 04:51:03,511 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:03,530 p=11115 u=mistral | TASK [Remove deployed file for CephStorageAllNodesDeployment when previous deployment failed] *** >2018-06-22 04:51:03,548 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:03,567 p=11115 u=mistral | TASK [Force remove deployed file for CephStorageAllNodesDeployment] ************ >2018-06-22 04:51:03,583 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:03,601 p=11115 u=mistral | TASK [Run deployment CephStorageAllNodesDeployment] **************************** >2018-06-22 04:51:04,515 p=11115 
u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/d7af911b-6e11-436b-a229-2a9e5e1a9234.notify.json)", "delta": "0:00:00.544026", "end": "2018-06-22 04:51:04.511558", "rc": 0, "start": "2018-06-22 04:51:03.967532", "stderr": "[2018-06-22 04:51:03,993] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/d7af911b-6e11-436b-a229-2a9e5e1a9234.json\n[2018-06-22 04:51:04,107] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-22 04:51:04,107] (heat-config) [DEBUG] \n[2018-06-22 04:51:04,107] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-06-22 04:51:04,108] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/d7af911b-6e11-436b-a229-2a9e5e1a9234.json < /var/lib/heat-config/deployed/d7af911b-6e11-436b-a229-2a9e5e1a9234.notify.json\n[2018-06-22 04:51:04,505] (heat-config) [INFO] \n[2018-06-22 04:51:04,505] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-22 04:51:03,993] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/d7af911b-6e11-436b-a229-2a9e5e1a9234.json", "[2018-06-22 04:51:04,107] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-22 04:51:04,107] (heat-config) [DEBUG] ", "[2018-06-22 04:51:04,107] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", "[2018-06-22 04:51:04,108] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/d7af911b-6e11-436b-a229-2a9e5e1a9234.json < /var/lib/heat-config/deployed/d7af911b-6e11-436b-a229-2a9e5e1a9234.notify.json", "[2018-06-22 04:51:04,505] (heat-config) [INFO] ", "[2018-06-22 04:51:04,505] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-22 04:51:04,534 p=11115 u=mistral | 
TASK [Output for CephStorageAllNodesDeployment] ******************************** >2018-06-22 04:51:04,586 p=11115 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-22 04:51:03,993] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/d7af911b-6e11-436b-a229-2a9e5e1a9234.json", > "[2018-06-22 04:51:04,107] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-22 04:51:04,107] (heat-config) [DEBUG] ", > "[2018-06-22 04:51:04,107] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", > "[2018-06-22 04:51:04,108] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/d7af911b-6e11-436b-a229-2a9e5e1a9234.json < /var/lib/heat-config/deployed/d7af911b-6e11-436b-a229-2a9e5e1a9234.notify.json", > "[2018-06-22 04:51:04,505] (heat-config) [INFO] ", > "[2018-06-22 04:51:04,505] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-22 04:51:04,607 p=11115 u=mistral | TASK [Check-mode for Run deployment CephStorageAllNodesDeployment] ************* >2018-06-22 04:51:04,622 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:04,640 p=11115 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-22 04:51:04,701 p=11115 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "b0a9b19e-903b-4737-b9ca-ff42a8144af0"}, "changed": false} >2018-06-22 04:51:04,720 p=11115 u=mistral | TASK [Render deployment file for CephStorageAllNodesValidationDeployment] ****** >2018-06-22 04:51:05,316 p=11115 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "85c57057807b61265476dfd79fd3ba5fe2b46fa4", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageAllNodesValidationDeployment-b0a9b19e-903b-4737-b9ca-ff42a8144af0", "gid": 0, 
"group": "root", "md5sum": "87858f40d4e87bde19f50bc9e796d3f1", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4943, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657464.78-186370975120115/source", "state": "file", "uid": 0} >2018-06-22 04:51:05,337 p=11115 u=mistral | TASK [Check if deployed file exists for CephStorageAllNodesValidationDeployment] *** >2018-06-22 04:51:05,650 p=11115 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 04:51:05,669 p=11115 u=mistral | TASK [Check previous deployment rc for CephStorageAllNodesValidationDeployment] *** >2018-06-22 04:51:05,686 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:05,706 p=11115 u=mistral | TASK [Remove deployed file for CephStorageAllNodesValidationDeployment when previous deployment failed] *** >2018-06-22 04:51:05,723 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:05,741 p=11115 u=mistral | TASK [Force remove deployed file for CephStorageAllNodesValidationDeployment] *** >2018-06-22 04:51:05,757 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:05,776 p=11115 u=mistral | TASK [Run deployment CephStorageAllNodesValidationDeployment] ****************** >2018-06-22 04:51:07,016 p=11115 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/b0a9b19e-903b-4737-b9ca-ff42a8144af0.notify.json)", "delta": "0:00:00.935203", "end": "2018-06-22 04:51:07.016985", "rc": 0, "start": "2018-06-22 04:51:06.081782", "stderr": "[2018-06-22 04:51:06,103] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < 
/var/lib/heat-config/deployed/b0a9b19e-903b-4737-b9ca-ff42a8144af0.json\n[2018-06-22 04:51:06,624] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 10.0.0.111 for local network 10.0.0.0/24.\\nPing to 10.0.0.111 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.11 for local network 172.17.3.0/24.\\nPing to 172.17.3.11 succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.19 for local network 172.17.4.0/24.\\nPing to 172.17.4.19 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.12 for local network 192.168.24.0/24.\\nPing to 192.168.24.12 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-22 04:51:06,624] (heat-config) [DEBUG] [2018-06-22 04:51:06,123] (heat-config) [INFO] ping_test_ips=172.17.3.11 172.17.4.19 172.17.1.10 172.17.2.12 10.0.0.111 192.168.24.12\n[2018-06-22 04:51:06,123] (heat-config) [INFO] validate_fqdn=False\n[2018-06-22 04:51:06,123] (heat-config) [INFO] validate_ntp=True\n[2018-06-22 04:51:06,123] (heat-config) [INFO] deploy_server_id=33738b22-53b0-409c-8c2a-3518ad03958c\n[2018-06-22 04:51:06,123] (heat-config) [INFO] deploy_action=CREATE\n[2018-06-22 04:51:06,123] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorageAllNodesValidationDeployment-m6fqhy62ap7e-0-t466jeqavgib/7f8de3f6-cbb7-40f3-b326-6b6b1b540c33\n[2018-06-22 04:51:06,123] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-22 04:51:06,123] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-22 04:51:06,123] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/b0a9b19e-903b-4737-b9ca-ff42a8144af0\n[2018-06-22 04:51:06,621] (heat-config) [INFO] Trying to ping 10.0.0.111 for local network 10.0.0.0/24.\nPing to 10.0.0.111 succeeded.\nSUCCESS\nTrying to ping 172.17.3.11 for local network 172.17.3.0/24.\nPing to 172.17.3.11 
succeeded.\nSUCCESS\nTrying to ping 172.17.4.19 for local network 172.17.4.0/24.\nPing to 172.17.4.19 succeeded.\nSUCCESS\nTrying to ping 192.168.24.12 for local network 192.168.24.0/24.\nPing to 192.168.24.12 succeeded.\nSUCCESS\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\nSUCCESS\n\n[2018-06-22 04:51:06,621] (heat-config) [DEBUG] \n[2018-06-22 04:51:06,621] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/b0a9b19e-903b-4737-b9ca-ff42a8144af0\n\n[2018-06-22 04:51:06,624] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-22 04:51:06,625] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/b0a9b19e-903b-4737-b9ca-ff42a8144af0.json < /var/lib/heat-config/deployed/b0a9b19e-903b-4737-b9ca-ff42a8144af0.notify.json\n[2018-06-22 04:51:07,011] (heat-config) [INFO] \n[2018-06-22 04:51:07,011] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-22 04:51:06,103] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/b0a9b19e-903b-4737-b9ca-ff42a8144af0.json", "[2018-06-22 04:51:06,624] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 10.0.0.111 for local network 10.0.0.0/24.\\nPing to 10.0.0.111 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.11 for local network 172.17.3.0/24.\\nPing to 172.17.3.11 succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.19 for local network 172.17.4.0/24.\\nPing to 172.17.4.19 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.12 for local network 192.168.24.0/24.\\nPing to 192.168.24.12 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-22 04:51:06,624] (heat-config) [DEBUG] [2018-06-22 04:51:06,123] (heat-config) [INFO] 
ping_test_ips=172.17.3.11 172.17.4.19 172.17.1.10 172.17.2.12 10.0.0.111 192.168.24.12", "[2018-06-22 04:51:06,123] (heat-config) [INFO] validate_fqdn=False", "[2018-06-22 04:51:06,123] (heat-config) [INFO] validate_ntp=True", "[2018-06-22 04:51:06,123] (heat-config) [INFO] deploy_server_id=33738b22-53b0-409c-8c2a-3518ad03958c", "[2018-06-22 04:51:06,123] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-22 04:51:06,123] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorageAllNodesValidationDeployment-m6fqhy62ap7e-0-t466jeqavgib/7f8de3f6-cbb7-40f3-b326-6b6b1b540c33", "[2018-06-22 04:51:06,123] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-22 04:51:06,123] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-22 04:51:06,123] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/b0a9b19e-903b-4737-b9ca-ff42a8144af0", "[2018-06-22 04:51:06,621] (heat-config) [INFO] Trying to ping 10.0.0.111 for local network 10.0.0.0/24.", "Ping to 10.0.0.111 succeeded.", "SUCCESS", "Trying to ping 172.17.3.11 for local network 172.17.3.0/24.", "Ping to 172.17.3.11 succeeded.", "SUCCESS", "Trying to ping 172.17.4.19 for local network 172.17.4.0/24.", "Ping to 172.17.4.19 succeeded.", "SUCCESS", "Trying to ping 192.168.24.12 for local network 192.168.24.0/24.", "Ping to 192.168.24.12 succeeded.", "SUCCESS", "Trying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.", "Trying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.", "SUCCESS", "", "[2018-06-22 04:51:06,621] (heat-config) [DEBUG] ", "[2018-06-22 04:51:06,621] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/b0a9b19e-903b-4737-b9ca-ff42a8144af0", "", "[2018-06-22 04:51:06,624] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-22 04:51:06,625] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/b0a9b19e-903b-4737-b9ca-ff42a8144af0.json < 
/var/lib/heat-config/deployed/b0a9b19e-903b-4737-b9ca-ff42a8144af0.notify.json", "[2018-06-22 04:51:07,011] (heat-config) [INFO] ", "[2018-06-22 04:51:07,011] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-22 04:51:07,035 p=11115 u=mistral | TASK [Output for CephStorageAllNodesValidationDeployment] ********************** >2018-06-22 04:51:07,081 p=11115 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-22 04:51:06,103] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/b0a9b19e-903b-4737-b9ca-ff42a8144af0.json", > "[2018-06-22 04:51:06,624] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 10.0.0.111 for local network 10.0.0.0/24.\\nPing to 10.0.0.111 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.11 for local network 172.17.3.0/24.\\nPing to 172.17.3.11 succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.19 for local network 172.17.4.0/24.\\nPing to 172.17.4.19 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.12 for local network 192.168.24.0/24.\\nPing to 192.168.24.12 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-22 04:51:06,624] (heat-config) [DEBUG] [2018-06-22 04:51:06,123] (heat-config) [INFO] ping_test_ips=172.17.3.11 172.17.4.19 172.17.1.10 172.17.2.12 10.0.0.111 192.168.24.12", > "[2018-06-22 04:51:06,123] (heat-config) [INFO] validate_fqdn=False", > "[2018-06-22 04:51:06,123] (heat-config) [INFO] validate_ntp=True", > "[2018-06-22 04:51:06,123] (heat-config) [INFO] deploy_server_id=33738b22-53b0-409c-8c2a-3518ad03958c", > "[2018-06-22 04:51:06,123] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-22 04:51:06,123] (heat-config) [INFO] 
deploy_stack_id=overcloud-CephStorageAllNodesValidationDeployment-m6fqhy62ap7e-0-t466jeqavgib/7f8de3f6-cbb7-40f3-b326-6b6b1b540c33", > "[2018-06-22 04:51:06,123] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-22 04:51:06,123] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-22 04:51:06,123] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/b0a9b19e-903b-4737-b9ca-ff42a8144af0", > "[2018-06-22 04:51:06,621] (heat-config) [INFO] Trying to ping 10.0.0.111 for local network 10.0.0.0/24.", > "Ping to 10.0.0.111 succeeded.", > "SUCCESS", > "Trying to ping 172.17.3.11 for local network 172.17.3.0/24.", > "Ping to 172.17.3.11 succeeded.", > "SUCCESS", > "Trying to ping 172.17.4.19 for local network 172.17.4.0/24.", > "Ping to 172.17.4.19 succeeded.", > "SUCCESS", > "Trying to ping 192.168.24.12 for local network 192.168.24.0/24.", > "Ping to 192.168.24.12 succeeded.", > "SUCCESS", > "Trying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.", > "Trying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.", > "SUCCESS", > "", > "[2018-06-22 04:51:06,621] (heat-config) [DEBUG] ", > "[2018-06-22 04:51:06,621] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/b0a9b19e-903b-4737-b9ca-ff42a8144af0", > "", > "[2018-06-22 04:51:06,624] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-22 04:51:06,625] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/b0a9b19e-903b-4737-b9ca-ff42a8144af0.json < /var/lib/heat-config/deployed/b0a9b19e-903b-4737-b9ca-ff42a8144af0.notify.json", > "[2018-06-22 04:51:07,011] (heat-config) [INFO] ", > "[2018-06-22 04:51:07,011] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-22 04:51:07,101 p=11115 u=mistral | TASK [Check-mode for Run deployment CephStorageAllNodesValidationDeployment] *** >2018-06-22 04:51:07,116 p=11115 u=mistral | 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:07,133 p=11115 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-22 04:51:07,196 p=11115 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "2718c86b-6229-493f-8dc4-ce862256c425"}, "changed": false} >2018-06-22 04:51:07,214 p=11115 u=mistral | TASK [Render deployment file for CephStorageHostPrepDeployment] **************** >2018-06-22 04:51:07,777 p=11115 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "74c32f94ffeab4f5df698cb7f2b4715659710732", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageHostPrepDeployment-2718c86b-6229-493f-8dc4-ce862256c425", "gid": 0, "group": "root", "md5sum": "11808aa4397dce99bb61872787f81d19", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 19872, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657467.28-37326540328346/source", "state": "file", "uid": 0} >2018-06-22 04:51:07,797 p=11115 u=mistral | TASK [Check if deployed file exists for CephStorageHostPrepDeployment] ********* >2018-06-22 04:51:08,114 p=11115 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 04:51:08,136 p=11115 u=mistral | TASK [Check previous deployment rc for CephStorageHostPrepDeployment] ********** >2018-06-22 04:51:08,154 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:08,173 p=11115 u=mistral | TASK [Remove deployed file for CephStorageHostPrepDeployment when previous deployment failed] *** >2018-06-22 04:51:08,190 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:08,208 p=11115 u=mistral | TASK [Force remove deployed file for CephStorageHostPrepDeployment] ************ >2018-06-22 04:51:08,226 p=11115 u=mistral | skipping: 
[ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:08,244 p=11115 u=mistral | TASK [Run deployment CephStorageHostPrepDeployment] **************************** >2018-06-22 04:51:13,019 p=11115 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/2718c86b-6229-493f-8dc4-ce862256c425.notify.json)", "delta": "0:00:04.457140", "end": "2018-06-22 04:51:13.022079", "rc": 0, "start": "2018-06-22 04:51:08.564939", "stderr": "[2018-06-22 04:51:08,589] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/2718c86b-6229-493f-8dc4-ce862256c425.json\n[2018-06-22 04:51:12,642] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-22 04:51:12,642] (heat-config) [DEBUG] [2018-06-22 04:51:08,611] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/2718c86b-6229-493f-8dc4-ce862256c425_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/2718c86b-6229-493f-8dc4-ce862256c425_variables.json\n[2018-06-22 04:51:12,638] (heat-config) [INFO] Return code 0\n[2018-06-22 04:51:12,639] (heat-config) [INFO] \nPLAY [localhost] ***************************************************************\n\nTASK [Gathering Facts] 
*********************************************************\nok: [localhost]\n\nTASK [Create /var/lib/docker-puppet] *******************************************\nchanged: [localhost]\n\nTASK [Write docker-puppet.py] **************************************************\nchanged: [localhost]\n\nPLAY RECAP *********************************************************************\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \n\n\n[2018-06-22 04:51:12,639] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/2718c86b-6229-493f-8dc4-ce862256c425_playbook.yaml\n\n[2018-06-22 04:51:12,642] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible\n[2018-06-22 04:51:12,643] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/2718c86b-6229-493f-8dc4-ce862256c425.json < /var/lib/heat-config/deployed/2718c86b-6229-493f-8dc4-ce862256c425.notify.json\n[2018-06-22 04:51:13,016] (heat-config) [INFO] \n[2018-06-22 04:51:13,017] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-22 04:51:08,589] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/2718c86b-6229-493f-8dc4-ce862256c425.json", "[2018-06-22 04:51:12,642] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-22 04:51:12,642] (heat-config) [DEBUG] [2018-06-22 04:51:08,611] (heat-config) [DEBUG] Running ansible-playbook -i 
localhost, /var/lib/heat-config/heat-config-ansible/2718c86b-6229-493f-8dc4-ce862256c425_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/2718c86b-6229-493f-8dc4-ce862256c425_variables.json", "[2018-06-22 04:51:12,638] (heat-config) [INFO] Return code 0", "[2018-06-22 04:51:12,639] (heat-config) [INFO] ", "PLAY [localhost] ***************************************************************", "", "TASK [Gathering Facts] *********************************************************", "ok: [localhost]", "", "TASK [Create /var/lib/docker-puppet] *******************************************", "changed: [localhost]", "", "TASK [Write docker-puppet.py] **************************************************", "changed: [localhost]", "", "PLAY RECAP *********************************************************************", "localhost : ok=3 changed=2 unreachable=0 failed=0 ", "", "", "[2018-06-22 04:51:12,639] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/2718c86b-6229-493f-8dc4-ce862256c425_playbook.yaml", "", "[2018-06-22 04:51:12,642] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", "[2018-06-22 04:51:12,643] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/2718c86b-6229-493f-8dc4-ce862256c425.json < /var/lib/heat-config/deployed/2718c86b-6229-493f-8dc4-ce862256c425.notify.json", "[2018-06-22 04:51:13,016] (heat-config) [INFO] ", "[2018-06-22 04:51:13,017] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-22 04:51:13,039 p=11115 u=mistral | TASK [Output for CephStorageHostPrepDeployment] ******************************** >2018-06-22 04:51:13,088 p=11115 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-22 04:51:08,589] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/2718c86b-6229-493f-8dc4-ce862256c425.json", > "[2018-06-22 04:51:12,642] (heat-config) [INFO] 
{\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-22 04:51:12,642] (heat-config) [DEBUG] [2018-06-22 04:51:08,611] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/2718c86b-6229-493f-8dc4-ce862256c425_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/2718c86b-6229-493f-8dc4-ce862256c425_variables.json", > "[2018-06-22 04:51:12,638] (heat-config) [INFO] Return code 0", > "[2018-06-22 04:51:12,639] (heat-config) [INFO] ", > "PLAY [localhost] ***************************************************************", > "", > "TASK [Gathering Facts] *********************************************************", > "ok: [localhost]", > "", > "TASK [Create /var/lib/docker-puppet] *******************************************", > "changed: [localhost]", > "", > "TASK [Write docker-puppet.py] **************************************************", > "changed: [localhost]", > "", > "PLAY RECAP *********************************************************************", > "localhost : ok=3 changed=2 unreachable=0 failed=0 ", > "", > "", > "[2018-06-22 04:51:12,639] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/2718c86b-6229-493f-8dc4-ce862256c425_playbook.yaml", > "", > "[2018-06-22 04:51:12,642] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", > "[2018-06-22 04:51:12,643] (heat-config) [DEBUG] 
Running heat-config-notify /var/lib/heat-config/deployed/2718c86b-6229-493f-8dc4-ce862256c425.json < /var/lib/heat-config/deployed/2718c86b-6229-493f-8dc4-ce862256c425.notify.json", > "[2018-06-22 04:51:13,016] (heat-config) [INFO] ", > "[2018-06-22 04:51:13,017] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-22 04:51:13,107 p=11115 u=mistral | TASK [Check-mode for Run deployment CephStorageHostPrepDeployment] ************* >2018-06-22 04:51:13,120 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:13,138 p=11115 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-22 04:51:13,187 p=11115 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "e28cff72-529a-4b19-872c-bc327924f84e"}, "changed": false} >2018-06-22 04:51:13,206 p=11115 u=mistral | TASK [Render deployment file for CephStorageArtifactsDeploy] ******************* >2018-06-22 04:51:13,777 p=11115 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "b3d84af3371233f7d8e8f73d0d924b9fde086c16", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageArtifactsDeploy-e28cff72-529a-4b19-872c-bc327924f84e", "gid": 0, "group": "root", "md5sum": "3e973509f875c332398e9c7e98d8bb0d", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2023, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657473.26-207151978218965/source", "state": "file", "uid": 0} >2018-06-22 04:51:13,798 p=11115 u=mistral | TASK [Check if deployed file exists for CephStorageArtifactsDeploy] ************ >2018-06-22 04:51:14,098 p=11115 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 04:51:14,117 p=11115 u=mistral | TASK [Check previous deployment rc for CephStorageArtifactsDeploy] ************* >2018-06-22 04:51:14,134 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, 
"skip_reason": "Conditional result was False"} >2018-06-22 04:51:14,154 p=11115 u=mistral | TASK [Remove deployed file for CephStorageArtifactsDeploy when previous deployment failed] *** >2018-06-22 04:51:14,170 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:14,189 p=11115 u=mistral | TASK [Force remove deployed file for CephStorageArtifactsDeploy] *************** >2018-06-22 04:51:14,205 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:14,224 p=11115 u=mistral | TASK [Run deployment CephStorageArtifactsDeploy] ******************************* >2018-06-22 04:51:14,956 p=11115 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/e28cff72-529a-4b19-872c-bc327924f84e.notify.json)", "delta": "0:00:00.420238", "end": "2018-06-22 04:51:14.955330", "rc": 0, "start": "2018-06-22 04:51:14.535092", "stderr": "[2018-06-22 04:51:14,560] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/e28cff72-529a-4b19-872c-bc327924f84e.json\n[2018-06-22 04:51:14,586] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. 
Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-22 04:51:14,587] (heat-config) [DEBUG] [2018-06-22 04:51:14,578] (heat-config) [INFO] artifact_urls=\n[2018-06-22 04:51:14,579] (heat-config) [INFO] deploy_server_id=33738b22-53b0-409c-8c2a-3518ad03958c\n[2018-06-22 04:51:14,579] (heat-config) [INFO] deploy_action=CREATE\n[2018-06-22 04:51:14,579] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-lyl23itvojuz-CephStorageArtifactsDeploy-wilpdltxz5pd-0-lsm2mtyjmwvg/532c6953-0ae5-46d2-a9a9-dd10eb3b4f4f\n[2018-06-22 04:51:14,579] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-22 04:51:14,579] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-22 04:51:14,579] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/e28cff72-529a-4b19-872c-bc327924f84e\n[2018-06-22 04:51:14,584] (heat-config) [INFO] No artifact_urls was set. Skipping...\n\n[2018-06-22 04:51:14,584] (heat-config) [DEBUG] \n[2018-06-22 04:51:14,584] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/e28cff72-529a-4b19-872c-bc327924f84e\n\n[2018-06-22 04:51:14,587] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-22 04:51:14,587] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/e28cff72-529a-4b19-872c-bc327924f84e.json < /var/lib/heat-config/deployed/e28cff72-529a-4b19-872c-bc327924f84e.notify.json\n[2018-06-22 04:51:14,948] (heat-config) [INFO] \n[2018-06-22 04:51:14,949] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-22 04:51:14,560] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/e28cff72-529a-4b19-872c-bc327924f84e.json", "[2018-06-22 04:51:14,586] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. 
Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-22 04:51:14,587] (heat-config) [DEBUG] [2018-06-22 04:51:14,578] (heat-config) [INFO] artifact_urls=", "[2018-06-22 04:51:14,579] (heat-config) [INFO] deploy_server_id=33738b22-53b0-409c-8c2a-3518ad03958c", "[2018-06-22 04:51:14,579] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-22 04:51:14,579] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-lyl23itvojuz-CephStorageArtifactsDeploy-wilpdltxz5pd-0-lsm2mtyjmwvg/532c6953-0ae5-46d2-a9a9-dd10eb3b4f4f", "[2018-06-22 04:51:14,579] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-22 04:51:14,579] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-22 04:51:14,579] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/e28cff72-529a-4b19-872c-bc327924f84e", "[2018-06-22 04:51:14,584] (heat-config) [INFO] No artifact_urls was set. Skipping...", "", "[2018-06-22 04:51:14,584] (heat-config) [DEBUG] ", "[2018-06-22 04:51:14,584] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/e28cff72-529a-4b19-872c-bc327924f84e", "", "[2018-06-22 04:51:14,587] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-22 04:51:14,587] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/e28cff72-529a-4b19-872c-bc327924f84e.json < /var/lib/heat-config/deployed/e28cff72-529a-4b19-872c-bc327924f84e.notify.json", "[2018-06-22 04:51:14,948] (heat-config) [INFO] ", "[2018-06-22 04:51:14,949] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-22 04:51:14,979 p=11115 u=mistral | TASK [Output for CephStorageArtifactsDeploy] *********************************** >2018-06-22 04:51:15,030 p=11115 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-22 04:51:14,560] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < 
/var/lib/heat-config/deployed/e28cff72-529a-4b19-872c-bc327924f84e.json", > "[2018-06-22 04:51:14,586] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-22 04:51:14,587] (heat-config) [DEBUG] [2018-06-22 04:51:14,578] (heat-config) [INFO] artifact_urls=", > "[2018-06-22 04:51:14,579] (heat-config) [INFO] deploy_server_id=33738b22-53b0-409c-8c2a-3518ad03958c", > "[2018-06-22 04:51:14,579] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-22 04:51:14,579] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-lyl23itvojuz-CephStorageArtifactsDeploy-wilpdltxz5pd-0-lsm2mtyjmwvg/532c6953-0ae5-46d2-a9a9-dd10eb3b4f4f", > "[2018-06-22 04:51:14,579] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-22 04:51:14,579] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-22 04:51:14,579] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/e28cff72-529a-4b19-872c-bc327924f84e", > "[2018-06-22 04:51:14,584] (heat-config) [INFO] No artifact_urls was set. 
Skipping...", > "", > "[2018-06-22 04:51:14,584] (heat-config) [DEBUG] ", > "[2018-06-22 04:51:14,584] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/e28cff72-529a-4b19-872c-bc327924f84e", > "", > "[2018-06-22 04:51:14,587] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-22 04:51:14,587] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/e28cff72-529a-4b19-872c-bc327924f84e.json < /var/lib/heat-config/deployed/e28cff72-529a-4b19-872c-bc327924f84e.notify.json", > "[2018-06-22 04:51:14,948] (heat-config) [INFO] ", > "[2018-06-22 04:51:14,949] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-22 04:51:15,051 p=11115 u=mistral | TASK [Check-mode for Run deployment CephStorageArtifactsDeploy] **************** >2018-06-22 04:51:15,066 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:15,072 p=11115 u=mistral | PLAY [Host prep steps] ********************************************************* >2018-06-22 04:51:15,112 p=11115 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-22 04:51:15,173 p=11115 u=mistral | skipping: [compute-0] => (item=/var/log/containers/aodh) => {"changed": false, "item": "/var/log/containers/aodh", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:15,174 p=11115 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/aodh-api) => {"changed": false, "item": "/var/log/containers/httpd/aodh-api", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:15,192 p=11115 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/aodh) => {"changed": false, "item": "/var/log/containers/aodh", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:15,196 p=11115 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/aodh-api) => {"changed": false, "item": 
"/var/log/containers/httpd/aodh-api", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:15,476 p=11115 u=mistral | ok: [controller-0] => (item=/var/log/containers/aodh) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/aodh", "mode": "0755", "owner": "root", "path": "/var/log/containers/aodh", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:51:15,797 p=11115 u=mistral | ok: [controller-0] => (item=/var/log/containers/httpd/aodh-api) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/aodh-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/aodh-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:51:15,822 p=11115 u=mistral | TASK [aodh logs readme] ******************************************************** >2018-06-22 04:51:15,879 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:15,894 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:16,440 p=11115 u=mistral | fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "b6cf6dbe054f430c33d39c1a1a88593536d6e659", "msg": "Destination directory /var/log/aodh does not exist"} >2018-06-22 04:51:16,441 p=11115 u=mistral | ...ignoring >2018-06-22 04:51:16,466 p=11115 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-22 04:51:16,523 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:16,537 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:16,821 p=11115 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/aodh", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:51:16,846 p=11115 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-22 04:51:16,900 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:16,916 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:17,197 p=11115 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/ceilometer", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:51:17,219 p=11115 u=mistral | TASK [ceilometer logs readme] ************************************************** >2018-06-22 04:51:17,272 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:17,287 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:17,842 p=11115 u=mistral | fatal: 
[controller-0]: FAILED! => {"changed": false, "checksum": "ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3", "msg": "Destination directory /var/log/ceilometer does not exist"} >2018-06-22 04:51:17,842 p=11115 u=mistral | ...ignoring >2018-06-22 04:51:17,864 p=11115 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-22 04:51:17,920 p=11115 u=mistral | skipping: [compute-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:17,921 p=11115 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/cinder-api) => {"changed": false, "item": "/var/log/containers/httpd/cinder-api", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:17,937 p=11115 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:17,942 p=11115 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/cinder-api) => {"changed": false, "item": "/var/log/containers/httpd/cinder-api", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:18,281 p=11115 u=mistral | ok: [controller-0] => (item=/var/log/containers/cinder) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/cinder", "mode": "0755", "owner": "root", "path": "/var/log/containers/cinder", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:51:18,592 p=11115 u=mistral | ok: [controller-0] => (item=/var/log/containers/httpd/cinder-api) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/cinder-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/cinder-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:51:18,617 p=11115 
u=mistral | TASK [cinder logs readme] ****************************************************** >2018-06-22 04:51:18,671 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:18,685 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:19,298 p=11115 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "0a3814f5aad089ba842c13ffc2c7bb7a7b3e8292", "msg": "Destination directory /var/log/cinder does not exist"} >2018-06-22 04:51:19,299 p=11115 u=mistral | ...ignoring >2018-06-22 04:51:19,323 p=11115 u=mistral | TASK [create persistent directories] ******************************************* >2018-06-22 04:51:19,435 p=11115 u=mistral | skipping: [compute-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:19,438 p=11115 u=mistral | skipping: [compute-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:19,455 p=11115 u=mistral | skipping: [ceph-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:19,459 p=11115 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:19,740 p=11115 u=mistral | ok: [controller-0] => (item=/var/lib/cinder) => {"changed": false, "gid": 0, "group": "root", "item": "/var/lib/cinder", "mode": "0755", "owner": "root", "path": "/var/lib/cinder", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:51:20,051 p=11115 u=mistral | ok: [controller-0] => (item=/var/log/containers/cinder) => {"changed": false, 
"gid": 0, "group": "root", "item": "/var/log/containers/cinder", "mode": "0755", "owner": "root", "path": "/var/log/containers/cinder", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:51:20,074 p=11115 u=mistral | TASK [ensure ceph configurations exist] **************************************** >2018-06-22 04:51:20,125 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:20,140 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:20,428 p=11115 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/ceph", "secontext": "unconfined_u:object_r:etc_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:51:20,452 p=11115 u=mistral | TASK [create persistent directories] ******************************************* >2018-06-22 04:51:20,506 p=11115 u=mistral | skipping: [compute-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:20,522 p=11115 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:20,831 p=11115 u=mistral | ok: [controller-0] => (item=/var/log/containers/cinder) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/cinder", "mode": "0755", "owner": "root", "path": "/var/log/containers/cinder", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:51:20,856 p=11115 u=mistral | TASK [create persistent directories] ******************************************* >2018-06-22 04:51:20,912 p=11115 u=mistral | skipping: [compute-0] => 
(item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:20,913 p=11115 u=mistral | skipping: [compute-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:20,924 p=11115 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:20,928 p=11115 u=mistral | skipping: [ceph-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:21,211 p=11115 u=mistral | ok: [controller-0] => (item=/var/log/containers/cinder) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/cinder", "mode": "0755", "owner": "root", "path": "/var/log/containers/cinder", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:51:21,524 p=11115 u=mistral | ok: [controller-0] => (item=/var/lib/cinder) => {"changed": false, "gid": 0, "group": "root", "item": "/var/lib/cinder", "mode": "0755", "owner": "root", "path": "/var/lib/cinder", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:51:21,551 p=11115 u=mistral | TASK [cinder_enable_iscsi_backend fact] **************************************** >2018-06-22 04:51:21,607 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:21,608 p=11115 u=mistral | ok: [controller-0] => {"ansible_facts": {"cinder_enable_iscsi_backend": false}, "changed": false} >2018-06-22 04:51:21,620 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:21,642 p=11115 u=mistral | TASK [cinder create 
LVM volume group dd] *************************************** >2018-06-22 04:51:21,669 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:21,693 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:21,704 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:21,727 p=11115 u=mistral | TASK [cinder create LVM volume group] ****************************************** >2018-06-22 04:51:21,756 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:21,778 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:21,789 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:21,810 p=11115 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-22 04:51:21,859 p=11115 u=mistral | skipping: [compute-0] => (item=/var/log/containers/glance) => {"changed": false, "item": "/var/log/containers/glance", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:21,880 p=11115 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/glance) => {"changed": false, "item": "/var/log/containers/glance", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:22,156 p=11115 u=mistral | ok: [controller-0] => (item=/var/log/containers/glance) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/glance", "mode": "0755", "owner": "root", "path": "/var/log/containers/glance", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:51:22,178 p=11115 u=mistral | TASK [glance logs readme] 
****************************************************** >2018-06-22 04:51:22,231 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:22,246 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:22,764 p=11115 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "e368ae3272baeb19e1113009ea5dae00e797c919", "msg": "Destination directory /var/log/glance does not exist"} >2018-06-22 04:51:22,764 p=11115 u=mistral | ...ignoring >2018-06-22 04:51:22,787 p=11115 u=mistral | TASK [set_fact] **************************************************************** >2018-06-22 04:51:22,813 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:22,835 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:22,848 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:22,870 p=11115 u=mistral | TASK [file] ******************************************************************** >2018-06-22 04:51:22,896 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:22,919 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:22,931 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:22,955 p=11115 u=mistral | TASK [stat] ******************************************************************** >2018-06-22 04:51:22,983 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:23,007 p=11115 u=mistral | skipping: [compute-0] 
=> {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:23,018 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:23,042 p=11115 u=mistral | TASK [copy] ******************************************************************** >2018-06-22 04:51:23,100 p=11115 u=mistral | skipping: [controller-0] => (item={u'NETAPP_SHARE': u''}) => {"changed": false, "item": {"NETAPP_SHARE": ""}, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:23,101 p=11115 u=mistral | skipping: [compute-0] => (item={u'NETAPP_SHARE': u''}) => {"changed": false, "item": {"NETAPP_SHARE": ""}, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:23,113 p=11115 u=mistral | skipping: [ceph-0] => (item={u'NETAPP_SHARE': u''}) => {"changed": false, "item": {"NETAPP_SHARE": ""}, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:23,135 p=11115 u=mistral | TASK [mount] ******************************************************************* >2018-06-22 04:51:23,166 p=11115 u=mistral | skipping: [controller-0] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) => {"changed": false, "item": {"NETAPP_SHARE": "", "NFS_OPTIONS": "_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0"}, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:23,195 p=11115 u=mistral | skipping: [compute-0] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) => {"changed": false, "item": {"NETAPP_SHARE": "", "NFS_OPTIONS": "_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0"}, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:23,209 p=11115 u=mistral | skipping: [ceph-0] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) => {"changed": false, "item": 
{"NETAPP_SHARE": "", "NFS_OPTIONS": "_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0"}, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:23,231 p=11115 u=mistral | TASK [Mount Node Staging Location] ********************************************* >2018-06-22 04:51:23,260 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:23,286 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:23,297 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:23,319 p=11115 u=mistral | TASK [Mount NFS on host] ******************************************************* >2018-06-22 04:51:23,347 p=11115 u=mistral | skipping: [controller-0] => (item={u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0', u'NFS_SHARE': u''}) => {"changed": false, "item": {"NFS_OPTIONS": "_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0", "NFS_SHARE": ""}, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:23,373 p=11115 u=mistral | skipping: [compute-0] => (item={u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0', u'NFS_SHARE': u''}) => {"changed": false, "item": {"NFS_OPTIONS": "_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0", "NFS_SHARE": ""}, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:23,386 p=11115 u=mistral | skipping: [ceph-0] => (item={u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0', u'NFS_SHARE': u''}) => {"changed": false, "item": {"NFS_OPTIONS": "_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0", "NFS_SHARE": ""}, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:23,410 p=11115 u=mistral | TASK [create persistent logs directory] **************************************** 
>2018-06-22 04:51:23,463 p=11115 u=mistral | skipping: [compute-0] => (item=/var/log/containers/gnocchi) => {"changed": false, "item": "/var/log/containers/gnocchi", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:23,464 p=11115 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/gnocchi-api) => {"changed": false, "item": "/var/log/containers/httpd/gnocchi-api", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:23,477 p=11115 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/gnocchi) => {"changed": false, "item": "/var/log/containers/gnocchi", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:23,483 p=11115 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/gnocchi-api) => {"changed": false, "item": "/var/log/containers/httpd/gnocchi-api", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:23,744 p=11115 u=mistral | ok: [controller-0] => (item=/var/log/containers/gnocchi) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/gnocchi", "mode": "0755", "owner": "root", "path": "/var/log/containers/gnocchi", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:51:24,061 p=11115 u=mistral | ok: [controller-0] => (item=/var/log/containers/httpd/gnocchi-api) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/gnocchi-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/gnocchi-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:51:24,084 p=11115 u=mistral | TASK [gnocchi logs readme] ***************************************************** >2018-06-22 04:51:24,134 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:24,148 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": 
"Conditional result was False"} >2018-06-22 04:51:24,720 p=11115 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "2f6114e0f135d7222e70a07579ab0b2b6f967ff8", "msg": "Destination directory /var/log/gnocchi does not exist"} >2018-06-22 04:51:24,720 p=11115 u=mistral | ...ignoring >2018-06-22 04:51:24,741 p=11115 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-22 04:51:24,798 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:24,805 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:25,108 p=11115 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/gnocchi", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:51:25,133 p=11115 u=mistral | TASK [get parameters] ********************************************************** >2018-06-22 04:51:25,185 p=11115 u=mistral | ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 04:51:25,186 p=11115 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 04:51:25,197 p=11115 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 04:51:25,219 p=11115 u=mistral | TASK [get DeployedSSLCertificatePath attributes] ******************************* >2018-06-22 04:51:25,245 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:25,272 p=11115 u=mistral 
| skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:25,282 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:25,304 p=11115 u=mistral | TASK [Assign bootstrap node] *************************************************** >2018-06-22 04:51:25,331 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:25,357 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:25,369 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:25,393 p=11115 u=mistral | TASK [set is_bootstrap_node fact] ********************************************** >2018-06-22 04:51:25,447 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:25,448 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:25,459 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:25,480 p=11115 u=mistral | TASK [get haproxy status] ****************************************************** >2018-06-22 04:51:25,508 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:25,532 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:25,545 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:25,566 p=11115 u=mistral | TASK [get pacemaker status] **************************************************** >2018-06-22 04:51:25,593 p=11115 u=mistral | 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:25,615 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:25,632 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:25,657 p=11115 u=mistral | TASK [get docker status] ******************************************************* >2018-06-22 04:51:25,685 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:25,710 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:25,723 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:25,745 p=11115 u=mistral | TASK [get container_id] ******************************************************** >2018-06-22 04:51:25,771 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:25,794 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:25,805 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:25,826 p=11115 u=mistral | TASK [get pcs resource name for haproxy container] ***************************** >2018-06-22 04:51:25,853 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:25,879 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:25,890 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:25,911 p=11115 
u=mistral | TASK [remove DeployedSSLCertificatePath if is dir] ***************************** >2018-06-22 04:51:25,942 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:25,968 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:25,980 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:26,002 p=11115 u=mistral | TASK [push certificate content] ************************************************ >2018-06-22 04:51:26,033 p=11115 u=mistral | skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 04:51:26,068 p=11115 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 04:51:26,076 p=11115 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 04:51:26,097 p=11115 u=mistral | TASK [set certificate ownership] *********************************************** >2018-06-22 04:51:26,152 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:26,157 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:26,170 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:26,191 p=11115 u=mistral | TASK [reload haproxy if enabled] *********************************************** >2018-06-22 04:51:26,216 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional 
result was False"} >2018-06-22 04:51:26,243 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:26,255 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:26,308 p=11115 u=mistral | TASK [restart pacemaker resource for haproxy] ********************************** >2018-06-22 04:51:26,337 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:26,361 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:26,373 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:26,394 p=11115 u=mistral | TASK [set kolla_dir fact] ****************************************************** >2018-06-22 04:51:26,421 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:26,447 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:26,458 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:26,479 p=11115 u=mistral | TASK [set certificate group on host via container] ***************************** >2018-06-22 04:51:26,504 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:26,527 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:26,538 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:26,560 p=11115 u=mistral | TASK [copy certificate from kolla directory to final location] 
***************** >2018-06-22 04:51:26,590 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:26,618 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:26,628 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:26,649 p=11115 u=mistral | TASK [send restart order to haproxy container] ********************************* >2018-06-22 04:51:26,679 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:26,705 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:26,717 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:26,738 p=11115 u=mistral | TASK [create persistent directories] ******************************************* >2018-06-22 04:51:26,789 p=11115 u=mistral | skipping: [compute-0] => (item=/var/lib/haproxy) => {"changed": false, "item": "/var/lib/haproxy", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:26,804 p=11115 u=mistral | skipping: [ceph-0] => (item=/var/lib/haproxy) => {"changed": false, "item": "/var/lib/haproxy", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:27,088 p=11115 u=mistral | ok: [controller-0] => (item=/var/lib/haproxy) => {"changed": false, "gid": 188, "group": "haproxy", "item": "/var/lib/haproxy", "mode": "0755", "owner": "haproxy", "path": "/var/lib/haproxy", "secontext": "system_u:object_r:haproxy_var_lib_t:s0", "size": 6, "state": "directory", "uid": 188} >2018-06-22 04:51:27,110 p=11115 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-22 04:51:27,164 p=11115 u=mistral | skipping: [compute-0] => 
(item=/var/log/containers/heat) => {"changed": false, "item": "/var/log/containers/heat", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:27,165 p=11115 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/heat-api) => {"changed": false, "item": "/var/log/containers/httpd/heat-api", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:27,177 p=11115 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/heat) => {"changed": false, "item": "/var/log/containers/heat", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:27,186 p=11115 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/heat-api) => {"changed": false, "item": "/var/log/containers/httpd/heat-api", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:27,456 p=11115 u=mistral | ok: [controller-0] => (item=/var/log/containers/heat) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/heat", "mode": "0755", "owner": "root", "path": "/var/log/containers/heat", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:51:27,764 p=11115 u=mistral | ok: [controller-0] => (item=/var/log/containers/httpd/heat-api) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/heat-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/heat-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:51:27,788 p=11115 u=mistral | TASK [heat logs readme] ******************************************************** >2018-06-22 04:51:27,840 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:27,852 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:28,401 p=11115 u=mistral | fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "d30ca3bda176434d31659e7379616dd162ddb246", "msg": "Destination directory /var/log/heat does not exist"} >2018-06-22 04:51:28,401 p=11115 u=mistral | ...ignoring >2018-06-22 04:51:28,426 p=11115 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-22 04:51:28,480 p=11115 u=mistral | skipping: [compute-0] => (item=/var/log/containers/heat) => {"changed": false, "item": "/var/log/containers/heat", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:28,480 p=11115 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/heat-api-cfn) => {"changed": false, "item": "/var/log/containers/httpd/heat-api-cfn", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:28,492 p=11115 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/heat) => {"changed": false, "item": "/var/log/containers/heat", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:28,497 p=11115 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/heat-api-cfn) => {"changed": false, "item": "/var/log/containers/httpd/heat-api-cfn", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:28,776 p=11115 u=mistral | ok: [controller-0] => (item=/var/log/containers/heat) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/heat", "mode": "0755", "owner": "root", "path": "/var/log/containers/heat", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:51:29,085 p=11115 u=mistral | ok: [controller-0] => (item=/var/log/containers/httpd/heat-api-cfn) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/heat-api-cfn", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/heat-api-cfn", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:51:29,107 p=11115 u=mistral | TASK [create 
persistent logs directory] **************************************** >2018-06-22 04:51:29,155 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:29,169 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:29,444 p=11115 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/heat", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:51:29,465 p=11115 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-22 04:51:29,514 p=11115 u=mistral | skipping: [compute-0] => (item=/var/log/containers/horizon) => {"changed": false, "item": "/var/log/containers/horizon", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:29,520 p=11115 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/horizon) => {"changed": false, "item": "/var/log/containers/httpd/horizon", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:29,534 p=11115 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/horizon) => {"changed": false, "item": "/var/log/containers/horizon", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:29,539 p=11115 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/horizon) => {"changed": false, "item": "/var/log/containers/httpd/horizon", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:29,817 p=11115 u=mistral | ok: [controller-0] => (item=/var/log/containers/horizon) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/horizon", "mode": "0755", "owner": "root", "path": "/var/log/containers/horizon", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:51:30,128 p=11115 
u=mistral | ok: [controller-0] => (item=/var/log/containers/httpd/horizon) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/horizon", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/horizon", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:51:30,150 p=11115 u=mistral | TASK [horizon logs readme] ***************************************************** >2018-06-22 04:51:30,201 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:30,215 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:30,767 p=11115 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "ac324739761cb36b925d6e309482e26f7fe49b91", "msg": "Destination directory /var/log/horizon does not exist"} >2018-06-22 04:51:30,768 p=11115 u=mistral | ...ignoring >2018-06-22 04:51:30,791 p=11115 u=mistral | TASK [stat /lib/systemd/system/iscsid.socket] ********************************** >2018-06-22 04:51:30,840 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:30,855 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:31,143 p=11115 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"atime": 1529657379.283401, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "424de87cd6ae66547b285288742255731a46ab83", "ctime": 1529433183.0936344, "dev": 64514, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 5335882, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", 
"mode": "0644", "mtime": 1513292517.0, "nlink": 1, "path": "/lib/systemd/system/iscsid.socket", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 175, "uid": 0, "version": "18446744072695807771", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}} >2018-06-22 04:51:31,166 p=11115 u=mistral | TASK [Stop and disable iscsid.socket service] ********************************** >2018-06-22 04:51:31,217 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:31,227 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:31,608 p=11115 u=mistral | ok: [controller-0] => {"changed": false, "enabled": false, "name": "iscsid.socket", "state": "stopped", "status": {"Accept": "no", "ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "-.slice sysinit.target", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Backlog": "128", "Before": "iscsid.service shutdown.target sockets.target", "BindIPv6Only": "default", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "Broadcast": "no", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "DeferAcceptUSec": "0", "Delegate": "no", "Description": "Open-iSCSI iscsid Socket", "DevicePolicy": "auto", "DirectoryMode": "0755", "Documentation": "man:iscsid(8) man:iscsiadm(8)", "FragmentPath": 
"/usr/lib/systemd/system/iscsid.socket", "FreeBind": "no", "IOScheduling": "0", "IPTOS": "-1", "IPTTL": "-1", "Id": "iscsid.socket", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KeepAlive": "no", "KeepAliveIntervalUSec": "0", "KeepAliveProbes": "0", "KeepAliveTimeUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "4096", "LimitNPROC": "127793", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "127793", "LimitSTACK": "18446744073709551615", "ListenStream": "@ISCSIADM_ABSTRACT_NAMESPACE", "LoadState": "loaded", "Mark": "-1", "MaxConnections": "64", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "NAccepted": "0", "NConnections": "0", "Names": "iscsid.socket", "NeedDaemonReload": "no", "Nice": "0", "NoDelay": "no", "NoNewPrivileges": "no", "NonBlocking": "no", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PassCredentials": "no", "PassSecurity": "no", "PipeSize": "0", "Priority": "-1", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "ReceiveBuffer": "0", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemoveOnStop": "no", "Requires": "sysinit.target", "Result": "success", "ReusePort": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendBuffer": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "SocketMode": "0666", "StandardError": "inherit", 
"StandardInput": "null", "StandardOutput": "journal", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Transparent": "no", "Triggers": "iscsid.service", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "-.slice"}} >2018-06-22 04:51:31,631 p=11115 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-22 04:51:31,682 p=11115 u=mistral | skipping: [compute-0] => (item=/var/log/containers/keystone) => {"changed": false, "item": "/var/log/containers/keystone", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:31,684 p=11115 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/keystone) => {"changed": false, "item": "/var/log/containers/httpd/keystone", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:31,697 p=11115 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/keystone) => {"changed": false, "item": "/var/log/containers/keystone", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:31,700 p=11115 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/keystone) => {"changed": false, "item": "/var/log/containers/httpd/keystone", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:31,972 p=11115 u=mistral | ok: [controller-0] => (item=/var/log/containers/keystone) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/keystone", "mode": "0755", "owner": "root", "path": "/var/log/containers/keystone", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": 
"directory", "uid": 0} >2018-06-22 04:51:32,270 p=11115 u=mistral | ok: [controller-0] => (item=/var/log/containers/httpd/keystone) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/keystone", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/keystone", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:51:32,292 p=11115 u=mistral | TASK [keystone logs readme] **************************************************** >2018-06-22 04:51:32,344 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:32,355 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:32,893 p=11115 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "910be882addb6df99267e9bd303f6d9bf658562e", "msg": "Destination directory /var/log/keystone does not exist"} >2018-06-22 04:51:32,894 p=11115 u=mistral | ...ignoring >2018-06-22 04:51:32,917 p=11115 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-22 04:51:32,970 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:32,982 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:33,250 p=11115 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/memcached", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:51:33,273 p=11115 u=mistral | TASK [memcached logs readme] *************************************************** >2018-06-22 04:51:33,323 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result 
was False"} >2018-06-22 04:51:33,336 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:33,827 p=11115 u=mistral | ok: [controller-0] => {"changed": false, "checksum": "f72ee86fbe604c83734785fe970323e58e3fad9e", "dest": "/var/log/memcached-readme.txt", "gid": 0, "group": "root", "mode": "0644", "owner": "root", "path": "/var/log/memcached-readme.txt", "secontext": "system_u:object_r:var_log_t:s0", "size": 86, "state": "file", "uid": 0} >2018-06-22 04:51:33,851 p=11115 u=mistral | TASK [create persistent directories] ******************************************* >2018-06-22 04:51:33,906 p=11115 u=mistral | skipping: [compute-0] => (item=/var/log/containers/mysql) => {"changed": false, "item": "/var/log/containers/mysql", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:33,907 p=11115 u=mistral | skipping: [compute-0] => (item=/var/lib/mysql) => {"changed": false, "item": "/var/lib/mysql", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:33,918 p=11115 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/mysql) => {"changed": false, "item": "/var/log/containers/mysql", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:33,923 p=11115 u=mistral | skipping: [ceph-0] => (item=/var/lib/mysql) => {"changed": false, "item": "/var/lib/mysql", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:34,184 p=11115 u=mistral | ok: [controller-0] => (item=/var/log/containers/mysql) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/mysql", "mode": "0755", "owner": "root", "path": "/var/log/containers/mysql", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:51:34,456 p=11115 u=mistral | ok: [controller-0] => (item=/var/lib/mysql) => {"changed": false, "gid": 27, "group": "mysql", "item": "/var/lib/mysql", "mode": "0755", "owner": "mysql", "path": 
"/var/lib/mysql", "secontext": "system_u:object_r:mysqld_db_t:s0", "size": 6, "state": "directory", "uid": 27} >2018-06-22 04:51:34,480 p=11115 u=mistral | TASK [mysql logs readme] ******************************************************* >2018-06-22 04:51:34,538 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:34,551 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:35,010 p=11115 u=mistral | ok: [controller-0] => {"changed": false, "checksum": "de8fb5fe96200ab286121f8a09419702bd693743", "dest": "/var/log/mariadb/readme.txt", "gid": 0, "group": "root", "mode": "0644", "owner": "root", "path": "/var/log/mariadb/readme.txt", "secontext": "system_u:object_r:mysqld_log_t:s0", "size": 78, "state": "file", "uid": 0} >2018-06-22 04:51:35,036 p=11115 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-22 04:51:35,093 p=11115 u=mistral | skipping: [compute-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:35,094 p=11115 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/neutron-api) => {"changed": false, "item": "/var/log/containers/httpd/neutron-api", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:35,109 p=11115 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:35,115 p=11115 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/neutron-api) => {"changed": false, "item": "/var/log/containers/httpd/neutron-api", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:35,360 p=11115 u=mistral | ok: [controller-0] => (item=/var/log/containers/neutron) => 
{"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/neutron", "mode": "0755", "owner": "root", "path": "/var/log/containers/neutron", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:51:35,640 p=11115 u=mistral | ok: [controller-0] => (item=/var/log/containers/httpd/neutron-api) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/neutron-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/neutron-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:51:35,663 p=11115 u=mistral | TASK [neutron logs readme] ***************************************************** >2018-06-22 04:51:35,718 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:35,730 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:36,221 p=11115 u=mistral | fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "f5a95f434a4aad25a9a81a045dec39159a6e8864", "msg": "Destination directory /var/log/neutron does not exist"} >2018-06-22 04:51:36,221 p=11115 u=mistral | ...ignoring >2018-06-22 04:51:36,244 p=11115 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-22 04:51:36,296 p=11115 u=mistral | skipping: [compute-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:36,312 p=11115 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:36,567 p=11115 u=mistral | ok: [controller-0] => (item=/var/log/containers/neutron) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/neutron", "mode": "0755", "owner": "root", "path": "/var/log/containers/neutron", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:51:36,594 p=11115 u=mistral | TASK [create /var/lib/neutron] ************************************************* >2018-06-22 04:51:36,649 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:36,662 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:36,962 p=11115 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/neutron", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:51:36,985 p=11115 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-22 04:51:37,043 p=11115 u=mistral | skipping: [compute-0] => (item=/var/log/containers/nova) => 
{"changed": false, "item": "/var/log/containers/nova", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:37,044 p=11115 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/nova-api) => {"changed": false, "item": "/var/log/containers/httpd/nova-api", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:37,059 p=11115 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/nova) => {"changed": false, "item": "/var/log/containers/nova", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:37,064 p=11115 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/nova-api) => {"changed": false, "item": "/var/log/containers/httpd/nova-api", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:37,359 p=11115 u=mistral | ok: [controller-0] => (item=/var/log/containers/nova) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/nova", "mode": "0755", "owner": "root", "path": "/var/log/containers/nova", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:51:37,639 p=11115 u=mistral | ok: [controller-0] => (item=/var/log/containers/httpd/nova-api) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/nova-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/nova-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:51:37,663 p=11115 u=mistral | TASK [nova logs readme] ******************************************************** >2018-06-22 04:51:37,764 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:37,776 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:38,289 p=11115 u=mistral | fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "c2216cc4edf5d3ce90f10748c3243db4e1842a85", "msg": "Destination directory /var/log/nova does not exist"} >2018-06-22 04:51:38,289 p=11115 u=mistral | ...ignoring >2018-06-22 04:51:38,312 p=11115 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-22 04:51:38,364 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:38,376 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:38,637 p=11115 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/nova", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:51:38,661 p=11115 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-22 04:51:38,713 p=11115 u=mistral | skipping: [compute-0] => (item=/var/log/containers/nova) => {"changed": false, "item": "/var/log/containers/nova", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:38,714 p=11115 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/nova-placement) => {"changed": false, "item": "/var/log/containers/httpd/nova-placement", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:38,726 p=11115 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/nova) => {"changed": false, "item": "/var/log/containers/nova", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:38,731 p=11115 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/nova-placement) => {"changed": false, "item": "/var/log/containers/httpd/nova-placement", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:38,998 p=11115 u=mistral | ok: [controller-0] => (item=/var/log/containers/nova) => {"changed": false, 
"gid": 0, "group": "root", "item": "/var/log/containers/nova", "mode": "0755", "owner": "root", "path": "/var/log/containers/nova", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:51:39,295 p=11115 u=mistral | ok: [controller-0] => (item=/var/log/containers/httpd/nova-placement) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/nova-placement", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/nova-placement", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:51:39,319 p=11115 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-22 04:51:39,370 p=11115 u=mistral | skipping: [compute-0] => (item=/var/log/containers/panko) => {"changed": false, "item": "/var/log/containers/panko", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:39,371 p=11115 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/panko-api) => {"changed": false, "item": "/var/log/containers/httpd/panko-api", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:39,391 p=11115 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/panko) => {"changed": false, "item": "/var/log/containers/panko", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:39,391 p=11115 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/panko-api) => {"changed": false, "item": "/var/log/containers/httpd/panko-api", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:39,654 p=11115 u=mistral | ok: [controller-0] => (item=/var/log/containers/panko) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/panko", "mode": "0755", "owner": "root", "path": "/var/log/containers/panko", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:51:39,938 p=11115 
u=mistral | ok: [controller-0] => (item=/var/log/containers/httpd/panko-api) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/panko-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/panko-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:51:39,962 p=11115 u=mistral | TASK [panko logs readme] ******************************************************* >2018-06-22 04:51:40,017 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:40,031 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:40,536 p=11115 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "903397bbd82e9b1f53087e3d7e8975d851857ce2", "msg": "Destination directory /var/log/panko does not exist"} >2018-06-22 04:51:40,536 p=11115 u=mistral | ...ignoring >2018-06-22 04:51:40,558 p=11115 u=mistral | TASK [create persistent directories] ******************************************* >2018-06-22 04:51:40,609 p=11115 u=mistral | skipping: [compute-0] => (item=/var/lib/rabbitmq) => {"changed": false, "item": "/var/lib/rabbitmq", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:40,610 p=11115 u=mistral | skipping: [compute-0] => (item=/var/log/containers/rabbitmq) => {"changed": false, "item": "/var/log/containers/rabbitmq", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:40,623 p=11115 u=mistral | skipping: [ceph-0] => (item=/var/lib/rabbitmq) => {"changed": false, "item": "/var/lib/rabbitmq", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:40,627 p=11115 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/rabbitmq) => {"changed": false, "item": "/var/log/containers/rabbitmq", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:40,885 p=11115 u=mistral | ok: 
[controller-0] => (item=/var/lib/rabbitmq) => {"changed": false, "gid": 0, "group": "root", "item": "/var/lib/rabbitmq", "mode": "0755", "owner": "root", "path": "/var/lib/rabbitmq", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:51:41,176 p=11115 u=mistral | ok: [controller-0] => (item=/var/log/containers/rabbitmq) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/rabbitmq", "mode": "0755", "owner": "root", "path": "/var/log/containers/rabbitmq", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:51:41,199 p=11115 u=mistral | TASK [rabbitmq logs readme] **************************************************** >2018-06-22 04:51:41,251 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:41,264 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:41,779 p=11115 u=mistral | fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "ee241f2199f264c9d0f384cf389fe255e8bf8a77", "msg": "Destination directory /var/log/rabbitmq does not exist"} >2018-06-22 04:51:41,780 p=11115 u=mistral | ...ignoring >2018-06-22 04:51:41,802 p=11115 u=mistral | TASK [stop the Erlang port mapper on the host and make sure it cannot bind to the port used by container] *** >2018-06-22 04:51:41,857 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:41,870 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:42,157 p=11115 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "echo 'export ERL_EPMD_ADDRESS=127.0.0.1' > /etc/rabbitmq/rabbitmq-env.conf\n echo 'export ERL_EPMD_PORT=4370' >> /etc/rabbitmq/rabbitmq-env.conf\n for pid in $(pgrep epmd --ns 1 --nslist pid); do kill $pid; done", "delta": "0:00:00.022457", "end": "2018-06-22 04:51:42.164780", "rc": 0, "start": "2018-06-22 04:51:42.142323", "stderr": "/bin/sh: /etc/rabbitmq/rabbitmq-env.conf: No such file or directory\n/bin/sh: line 1: /etc/rabbitmq/rabbitmq-env.conf: No such file or directory", "stderr_lines": ["/bin/sh: /etc/rabbitmq/rabbitmq-env.conf: No such file or directory", "/bin/sh: line 1: /etc/rabbitmq/rabbitmq-env.conf: No such file or directory"], "stdout": "", "stdout_lines": []} >2018-06-22 04:51:42,179 p=11115 u=mistral | TASK [create persistent directories] ******************************************* >2018-06-22 04:51:42,234 p=11115 u=mistral | skipping: [compute-0] => (item=/var/lib/redis) => {"changed": false, "item": "/var/lib/redis", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:42,235 p=11115 u=mistral | skipping: [compute-0] => (item=/var/log/containers/redis) => {"changed": false, "item": "/var/log/containers/redis", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:42,236 p=11115 u=mistral | skipping: [compute-0] => 
(item=/var/run/redis) => {"changed": false, "item": "/var/run/redis", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:42,258 p=11115 u=mistral | skipping: [ceph-0] => (item=/var/lib/redis) => {"changed": false, "item": "/var/lib/redis", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:42,263 p=11115 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/redis) => {"changed": false, "item": "/var/log/containers/redis", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:42,264 p=11115 u=mistral | skipping: [ceph-0] => (item=/var/run/redis) => {"changed": false, "item": "/var/run/redis", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:42,512 p=11115 u=mistral | ok: [controller-0] => (item=/var/lib/redis) => {"changed": false, "gid": 988, "group": "redis", "item": "/var/lib/redis", "mode": "0750", "owner": "redis", "path": "/var/lib/redis", "secontext": "system_u:object_r:redis_var_lib_t:s0", "size": 6, "state": "directory", "uid": 992} >2018-06-22 04:51:42,802 p=11115 u=mistral | ok: [controller-0] => (item=/var/log/containers/redis) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/redis", "mode": "0755", "owner": "root", "path": "/var/log/containers/redis", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:51:43,094 p=11115 u=mistral | ok: [controller-0] => (item=/var/run/redis) => {"changed": false, "gid": 988, "group": "redis", "item": "/var/run/redis", "mode": "0755", "owner": "redis", "path": "/var/run/redis", "secontext": "system_u:object_r:redis_var_run_t:s0", "size": 40, "state": "directory", "uid": 992} >2018-06-22 04:51:43,117 p=11115 u=mistral | TASK [redis logs readme] ******************************************************* >2018-06-22 04:51:43,169 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:43,180 p=11115 u=mistral | 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:43,651 p=11115 u=mistral | ok: [controller-0] => {"changed": false, "checksum": "42d03af8abf93e87fdb3fc69702638fc81d943fb", "dest": "/var/log/redis/readme.txt", "gid": 0, "group": "root", "mode": "0644", "owner": "root", "path": "/var/log/redis/readme.txt", "secontext": "system_u:object_r:redis_log_t:s0", "size": 78, "state": "file", "uid": 0} >2018-06-22 04:51:43,674 p=11115 u=mistral | TASK [create /var/lib/sahara] ************************************************** >2018-06-22 04:51:43,730 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:43,746 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:44,009 p=11115 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/sahara", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:51:44,031 p=11115 u=mistral | TASK [create persistent sahara logs directory] ********************************* >2018-06-22 04:51:44,087 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:44,100 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:44,347 p=11115 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/sahara", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:51:44,370 p=11115 u=mistral | TASK [sahara logs readme] ****************************************************** >2018-06-22 04:51:44,422 p=11115 u=mistral | skipping: [compute-0] => {"changed": 
false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:44,433 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:44,919 p=11115 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "b0212a1177fa4a88502d17a1cbc31198040cf047", "msg": "Destination directory /var/log/sahara does not exist"} >2018-06-22 04:51:44,920 p=11115 u=mistral | ...ignoring >2018-06-22 04:51:44,941 p=11115 u=mistral | TASK [create persistent directories] ******************************************* >2018-06-22 04:51:44,994 p=11115 u=mistral | skipping: [compute-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:44,995 p=11115 u=mistral | skipping: [compute-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:45,009 p=11115 u=mistral | skipping: [ceph-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:45,013 p=11115 u=mistral | skipping: [ceph-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:45,264 p=11115 u=mistral | ok: [controller-0] => (item=/srv/node) => {"changed": false, "gid": 0, "group": "root", "item": "/srv/node", "mode": "0755", "owner": "root", "path": "/srv/node", "secontext": "unconfined_u:object_r:var_t:s0", "size": 16, "state": "directory", "uid": 0} >2018-06-22 04:51:45,535 p=11115 u=mistral | ok: [controller-0] => (item=/var/log/swift) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/swift", "mode": "0755", "owner": "root", "path": "/var/log/swift", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 24, "state": "directory", "uid": 0} >2018-06-22 04:51:45,559 p=11115 u=mistral | TASK [Create swift logging 
symlink] ******************************************** >2018-06-22 04:51:45,611 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:45,624 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:45,877 p=11115 u=mistral | ok: [controller-0] => {"changed": false, "dest": "/var/log/containers/swift", "gid": 0, "group": "root", "mode": "0777", "owner": "root", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 14, "src": "/var/log/swift", "state": "link", "uid": 0} >2018-06-22 04:51:45,902 p=11115 u=mistral | TASK [create persistent directories] ******************************************* >2018-06-22 04:51:45,959 p=11115 u=mistral | skipping: [compute-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:45,960 p=11115 u=mistral | skipping: [compute-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:45,960 p=11115 u=mistral | skipping: [compute-0] => (item=/var/log/containers) => {"changed": false, "item": "/var/log/containers", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:45,971 p=11115 u=mistral | skipping: [ceph-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:45,974 p=11115 u=mistral | skipping: [ceph-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:45,980 p=11115 u=mistral | skipping: [ceph-0] => (item=/var/log/containers) => {"changed": false, "item": "/var/log/containers", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:46,219 p=11115 u=mistral | ok: [controller-0] => (item=/srv/node) => {"changed": false, "gid": 0, "group": "root", 
"item": "/srv/node", "mode": "0755", "owner": "root", "path": "/srv/node", "secontext": "unconfined_u:object_r:var_t:s0", "size": 16, "state": "directory", "uid": 0} >2018-06-22 04:51:46,493 p=11115 u=mistral | ok: [controller-0] => (item=/var/log/swift) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/swift", "mode": "0755", "owner": "root", "path": "/var/log/swift", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 24, "state": "directory", "uid": 0} >2018-06-22 04:51:46,772 p=11115 u=mistral | ok: [controller-0] => (item=/var/log/containers) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers", "mode": "0755", "owner": "root", "path": "/var/log/containers", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 261, "state": "directory", "uid": 0} >2018-06-22 04:51:46,797 p=11115 u=mistral | TASK [Set swift_use_local_disks fact] ****************************************** >2018-06-22 04:51:46,851 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:46,852 p=11115 u=mistral | ok: [controller-0] => {"ansible_facts": {"swift_use_local_disks": true}, "changed": false} >2018-06-22 04:51:46,863 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:46,884 p=11115 u=mistral | TASK [Create Swift d1 directory if needed] ************************************* >2018-06-22 04:51:46,937 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:46,950 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:47,194 p=11115 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/srv/node/d1", "secontext": "unconfined_u:object_r:var_t:s0", "size": 6, "state": "directory", 
"uid": 0} >2018-06-22 04:51:47,217 p=11115 u=mistral | TASK [swift logs readme] ******************************************************* >2018-06-22 04:51:47,270 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:47,282 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:47,721 p=11115 u=mistral | ok: [controller-0] => {"changed": false, "checksum": "42510a6de124722d6efbc2b1bb038bfe97e5b6d3", "dest": "/var/log/swift/readme.txt", "gid": 0, "group": "root", "mode": "0644", "owner": "root", "path": "/var/log/swift/readme.txt", "secontext": "system_u:object_r:var_log_t:s0", "size": 116, "state": "file", "uid": 0} >2018-06-22 04:51:47,743 p=11115 u=mistral | TASK [Format SwiftRawDisks] **************************************************** >2018-06-22 04:51:47,828 p=11115 u=mistral | TASK [Mount devices defined in SwiftRawDisks] ********************************** >2018-06-22 04:51:47,909 p=11115 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-22 04:51:47,937 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:47,975 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:48,282 p=11115 u=mistral | ok: [compute-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/ceilometer", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:51:48,306 p=11115 u=mistral | TASK [ceilometer logs readme] ************************************************** >2018-06-22 04:51:48,332 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:48,372 p=11115 
u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:48,973 p=11115 u=mistral | fatal: [compute-0]: FAILED! => {"changed": false, "checksum": "ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3", "msg": "Destination directory /var/log/ceilometer does not exist"} >2018-06-22 04:51:48,973 p=11115 u=mistral | ...ignoring >2018-06-22 04:51:48,997 p=11115 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-22 04:51:49,027 p=11115 u=mistral | skipping: [controller-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:49,073 p=11115 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:49,417 p=11115 u=mistral | ok: [compute-0] => (item=/var/log/containers/neutron) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/neutron", "mode": "0755", "owner": "root", "path": "/var/log/containers/neutron", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:51:49,441 p=11115 u=mistral | TASK [neutron logs readme] ***************************************************** >2018-06-22 04:51:49,468 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:49,510 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:50,104 p=11115 u=mistral | fatal: [compute-0]: FAILED! 
=> {"changed": false, "checksum": "f5a95f434a4aad25a9a81a045dec39159a6e8864", "msg": "Destination directory /var/log/neutron does not exist"} >2018-06-22 04:51:50,104 p=11115 u=mistral | ...ignoring >2018-06-22 04:51:50,126 p=11115 u=mistral | TASK [stat /lib/systemd/system/iscsid.socket] ********************************** >2018-06-22 04:51:50,154 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:50,195 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:50,497 p=11115 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"atime": 1529657434.294984, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "424de87cd6ae66547b285288742255731a46ab83", "ctime": 1529433183.0936344, "dev": 64514, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 5335882, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1513292517.0, "nlink": 1, "path": "/lib/systemd/system/iscsid.socket", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 175, "uid": 0, "version": "18446744072695807771", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}} >2018-06-22 04:51:50,520 p=11115 u=mistral | TASK [Stop and disable iscsid.socket service] ********************************** >2018-06-22 04:51:50,548 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:50,584 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:50,892 p=11115 u=mistral | ok: [compute-0] => {"changed": 
false, "enabled": false, "name": "iscsid.socket", "state": "stopped", "status": {"Accept": "no", "ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "-.slice sysinit.target", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Backlog": "128", "Before": "shutdown.target iscsid.service sockets.target", "BindIPv6Only": "default", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "Broadcast": "no", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "DeferAcceptUSec": "0", "Delegate": "no", "Description": "Open-iSCSI iscsid Socket", "DevicePolicy": "auto", "DirectoryMode": "0755", "Documentation": "man:iscsid(8) man:iscsiadm(8)", "FragmentPath": "/usr/lib/systemd/system/iscsid.socket", "FreeBind": "no", "IOScheduling": "0", "IPTOS": "-1", "IPTTL": "-1", "Id": "iscsid.socket", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KeepAlive": "no", "KeepAliveIntervalUSec": "0", "KeepAliveProbes": "0", "KeepAliveTimeUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "4096", "LimitNPROC": 
"22967", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "22967", "LimitSTACK": "18446744073709551615", "ListenStream": "@ISCSIADM_ABSTRACT_NAMESPACE", "LoadState": "loaded", "Mark": "-1", "MaxConnections": "64", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "NAccepted": "0", "NConnections": "0", "Names": "iscsid.socket", "NeedDaemonReload": "no", "Nice": "0", "NoDelay": "no", "NoNewPrivileges": "no", "NonBlocking": "no", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PassCredentials": "no", "PassSecurity": "no", "PipeSize": "0", "Priority": "-1", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "ReceiveBuffer": "0", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemoveOnStop": "no", "Requires": "sysinit.target", "Result": "success", "ReusePort": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendBuffer": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "SocketMode": "0666", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Transparent": "no", "Triggers": "iscsid.service", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "-.slice"}} >2018-06-22 04:51:50,913 p=11115 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-22 04:51:50,940 p=11115 
u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:50,977 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:51,273 p=11115 u=mistral | ok: [compute-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/nova", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:51:51,295 p=11115 u=mistral | TASK [nova logs readme] ******************************************************** >2018-06-22 04:51:51,321 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:51,359 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:51,888 p=11115 u=mistral | fatal: [compute-0]: FAILED! => {"changed": false, "checksum": "c2216cc4edf5d3ce90f10748c3243db4e1842a85", "msg": "Destination directory /var/log/nova does not exist"} >2018-06-22 04:51:51,888 p=11115 u=mistral | ...ignoring >2018-06-22 04:51:51,911 p=11115 u=mistral | TASK [Mount Nova NFS Share] **************************************************** >2018-06-22 04:51:51,938 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:51,965 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:51,976 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:51,996 p=11115 u=mistral | TASK [create persistent directories] ******************************************* >2018-06-22 04:51:52,023 p=11115 u=mistral | skipping: [controller-0] => (item=/var/lib/nova) => {"changed": false, "item": "/var/lib/nova", "skip_reason": 
"Conditional result was False"} >2018-06-22 04:51:52,024 p=11115 u=mistral | skipping: [controller-0] => (item=/var/lib/libvirt) => {"changed": false, "item": "/var/lib/libvirt", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:52,064 p=11115 u=mistral | skipping: [ceph-0] => (item=/var/lib/nova) => {"changed": false, "item": "/var/lib/nova", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:52,067 p=11115 u=mistral | skipping: [ceph-0] => (item=/var/lib/libvirt) => {"changed": false, "item": "/var/lib/libvirt", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:52,357 p=11115 u=mistral | ok: [compute-0] => (item=/var/lib/nova) => {"changed": false, "gid": 0, "group": "root", "item": "/var/lib/nova", "mode": "0755", "owner": "root", "path": "/var/lib/nova", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:51:52,647 p=11115 u=mistral | ok: [compute-0] => (item=/var/lib/libvirt) => {"changed": false, "gid": 0, "group": "root", "item": "/var/lib/libvirt", "mode": "0755", "owner": "root", "path": "/var/lib/libvirt", "secontext": "system_u:object_r:virt_var_lib_t:s0", "size": 104, "state": "directory", "uid": 0} >2018-06-22 04:51:52,669 p=11115 u=mistral | TASK [ensure ceph configurations exist] **************************************** >2018-06-22 04:51:52,695 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:52,733 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:53,023 p=11115 u=mistral | ok: [compute-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/ceph", "secontext": "unconfined_u:object_r:etc_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:51:53,046 p=11115 u=mistral | TASK [is Instance HA enabled] 
************************************************** >2018-06-22 04:51:53,074 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:53,108 p=11115 u=mistral | ok: [compute-0] => {"ansible_facts": {"instance_ha_enabled": false}, "changed": false} >2018-06-22 04:51:53,110 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:53,130 p=11115 u=mistral | TASK [prepare Instance HA script directory] ************************************ >2018-06-22 04:51:53,157 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:53,181 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:53,193 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:53,213 p=11115 u=mistral | TASK [install Instance HA script that runs nova-compute] *********************** >2018-06-22 04:51:53,239 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:53,265 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:53,275 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:53,297 p=11115 u=mistral | TASK [Get list of instance HA compute nodes] *********************************** >2018-06-22 04:51:53,324 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:53,347 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:53,359 p=11115 u=mistral | skipping: [ceph-0] => 
{"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:53,379 p=11115 u=mistral | TASK [If instance HA is enabled on the node activate the evacuation completed check] *** >2018-06-22 04:51:53,405 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:53,428 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:53,441 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:53,461 p=11115 u=mistral | TASK [create libvirt persistent data directories] ****************************** >2018-06-22 04:51:53,489 p=11115 u=mistral | skipping: [controller-0] => (item=/etc/libvirt) => {"changed": false, "item": "/etc/libvirt", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:53,513 p=11115 u=mistral | skipping: [controller-0] => (item=/etc/libvirt/secrets) => {"changed": false, "item": "/etc/libvirt/secrets", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:53,514 p=11115 u=mistral | skipping: [controller-0] => (item=/etc/libvirt/qemu) => {"changed": false, "item": "/etc/libvirt/qemu", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:53,514 p=11115 u=mistral | skipping: [controller-0] => (item=/var/lib/libvirt) => {"changed": false, "item": "/var/lib/libvirt", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:53,515 p=11115 u=mistral | skipping: [controller-0] => (item=/var/log/containers/libvirt) => {"changed": false, "item": "/var/log/containers/libvirt", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:53,532 p=11115 u=mistral | skipping: [ceph-0] => (item=/etc/libvirt) => {"changed": false, "item": "/etc/libvirt", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:53,536 p=11115 u=mistral | skipping: [ceph-0] => 
(item=/etc/libvirt/secrets) => {"changed": false, "item": "/etc/libvirt/secrets", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:53,542 p=11115 u=mistral | skipping: [ceph-0] => (item=/etc/libvirt/qemu) => {"changed": false, "item": "/etc/libvirt/qemu", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:53,550 p=11115 u=mistral | skipping: [ceph-0] => (item=/var/lib/libvirt) => {"changed": false, "item": "/var/lib/libvirt", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:53,551 p=11115 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/libvirt) => {"changed": false, "item": "/var/log/containers/libvirt", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:53,810 p=11115 u=mistral | ok: [compute-0] => (item=/etc/libvirt) => {"changed": false, "gid": 0, "group": "root", "item": "/etc/libvirt", "mode": "0700", "owner": "root", "path": "/etc/libvirt", "secontext": "system_u:object_r:virt_etc_t:s0", "size": 215, "state": "directory", "uid": 0} >2018-06-22 04:51:54,107 p=11115 u=mistral | ok: [compute-0] => (item=/etc/libvirt/secrets) => {"changed": false, "gid": 0, "group": "root", "item": "/etc/libvirt/secrets", "mode": "0700", "owner": "root", "path": "/etc/libvirt/secrets", "secontext": "system_u:object_r:virt_etc_rw_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:51:54,403 p=11115 u=mistral | ok: [compute-0] => (item=/etc/libvirt/qemu) => {"changed": false, "gid": 0, "group": "root", "item": "/etc/libvirt/qemu", "mode": "0700", "owner": "root", "path": "/etc/libvirt/qemu", "secontext": "system_u:object_r:virt_etc_rw_t:s0", "size": 22, "state": "directory", "uid": 0} >2018-06-22 04:51:54,695 p=11115 u=mistral | ok: [compute-0] => (item=/var/lib/libvirt) => {"changed": false, "gid": 0, "group": "root", "item": "/var/lib/libvirt", "mode": "0755", "owner": "root", "path": "/var/lib/libvirt", "secontext": "system_u:object_r:virt_var_lib_t:s0", "size": 104, "state": "directory", "uid": 
0} >2018-06-22 04:51:54,988 p=11115 u=mistral | ok: [compute-0] => (item=/var/log/containers/libvirt) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/libvirt", "mode": "0755", "owner": "root", "path": "/var/log/containers/libvirt", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:51:55,012 p=11115 u=mistral | TASK [ensure qemu group is present on the host] ******************************** >2018-06-22 04:51:55,038 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:55,076 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:55,537 p=11115 u=mistral | ok: [compute-0] => {"changed": false, "gid": 107, "name": "qemu", "state": "present", "system": false} >2018-06-22 04:51:55,559 p=11115 u=mistral | TASK [ensure qemu user is present on the host] ********************************* >2018-06-22 04:51:55,586 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:55,624 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:56,123 p=11115 u=mistral | ok: [compute-0] => {"append": false, "changed": false, "comment": "qemu user", "group": 107, "home": "/", "move_home": false, "name": "qemu", "shell": "/sbin/nologin", "state": "present", "uid": 107} >2018-06-22 04:51:56,146 p=11115 u=mistral | TASK [create directory for vhost-user sockets with qemu ownership] ************* >2018-06-22 04:51:56,176 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:56,215 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:56,501 p=11115 u=mistral | ok: 
[compute-0] => {"changed": false, "gid": 107, "group": "qemu", "mode": "0755", "owner": "qemu", "path": "/var/lib/vhost_sockets", "secontext": "system_u:object_r:virt_cache_t:s0", "size": 6, "state": "directory", "uid": 107} >2018-06-22 04:51:56,526 p=11115 u=mistral | TASK [check if libvirt is installed] ******************************************* >2018-06-22 04:51:56,552 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:56,589 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:56,899 p=11115 u=mistral | [WARNING]: Consider using the yum, dnf or zypper module rather than running >rpm. If you need to use command because yum, dnf or zypper is insufficient you >can add warn=False to this command task or set command_warnings=False in >ansible.cfg to get rid of this message. > >2018-06-22 04:51:56,899 p=11115 u=mistral | changed: [compute-0] => {"changed": true, "cmd": ["/usr/bin/rpm", "-q", "libvirt-daemon"], "delta": "0:00:00.031761", "end": "2018-06-22 04:51:56.904094", "failed_when_result": false, "rc": 0, "start": "2018-06-22 04:51:56.872333", "stderr": "", "stderr_lines": [], "stdout": "libvirt-daemon-3.9.0-14.el7_5.5.x86_64", "stdout_lines": ["libvirt-daemon-3.9.0-14.el7_5.5.x86_64"]} >2018-06-22 04:51:56,922 p=11115 u=mistral | TASK [make sure libvirt services are disabled] ********************************* >2018-06-22 04:51:56,950 p=11115 u=mistral | skipping: [controller-0] => (item=libvirtd.service) => {"changed": false, "item": "libvirtd.service", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:56,951 p=11115 u=mistral | skipping: [controller-0] => (item=virtlogd.socket) => {"changed": false, "item": "virtlogd.socket", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:56,998 p=11115 u=mistral | skipping: [ceph-0] => (item=libvirtd.service) => {"changed": false, "item": 
"libvirtd.service", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:57,000 p=11115 u=mistral | skipping: [ceph-0] => (item=virtlogd.socket) => {"changed": false, "item": "virtlogd.socket", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:57,290 p=11115 u=mistral | ok: [compute-0] => (item=libvirtd.service) => {"changed": false, "enabled": false, "item": "libvirtd.service", "name": "libvirtd.service", "state": "stopped", "status": {"ActiveEnterTimestamp": "Fri 2018-06-22 04:45:51 EDT", "ActiveEnterTimestampMonotonic": "5023114", "ActiveExitTimestamp": "Fri 2018-06-22 04:50:37 EDT", "ActiveExitTimestampMonotonic": "291337217", "ActiveState": "inactive", "After": "remote-fs.target system.slice local-fs.target basic.target apparmor.service virtlogd.socket virtlogd.service virtlockd.socket iscsid.service virtlockd.service network.target systemd-journald.socket dbus.service", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Fri 2018-06-22 04:45:51 EDT", "AssertTimestampMonotonic": "4793965", "Before": "shutdown.target libvirt-guests.service", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Fri 2018-06-22 04:45:51 EDT", "ConditionTimestampMonotonic": "4793965", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Virtualization daemon", "DevicePolicy": "auto", "Documentation": "man:libvirtd(8) https://libvirt.org", "EnvironmentFile": "/etc/sysconfig/libvirtd (ignore_errors=yes)", "ExecMainCode": "1", "ExecMainExitTimestamp": "Fri 2018-06-22 04:50:37 EDT", 
"ExecMainExitTimestampMonotonic": "291344802", "ExecMainPID": "1159", "ExecMainStartTimestamp": "Fri 2018-06-22 04:45:51 EDT", "ExecMainStartTimestampMonotonic": "4795327", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/sbin/libvirtd ; argv[]=/usr/sbin/libvirtd $LIBVIRTD_ARGS ; ignore_errors=no ; start_time=[Fri 2018-06-22 04:45:51 EDT] ; stop_time=[Fri 2018-06-22 04:50:37 EDT] ; pid=1159 ; code=exited ; status=0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/libvirtd.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "libvirtd.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestamp": "Fri 2018-06-22 04:50:37 EDT", "InactiveEnterTimestampMonotonic": "291344882", "InactiveExitTimestamp": "Fri 2018-06-22 04:45:51 EDT", "InactiveExitTimestampMonotonic": "4795373", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "8192", "LimitNPROC": "22967", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "22967", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "0", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "libvirtd.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", 
"PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "Requires": "basic.target virtlockd.socket virtlogd.socket", "Restart": "on-failure", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "32768", "TimeoutStartUSec": "1min 30s", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState": "disabled", "WantedBy": "libvirt-guests.service", "Wants": "system.slice", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}} >2018-06-22 04:51:57,597 p=11115 u=mistral | ok: [compute-0] => (item=virtlogd.socket) => {"changed": false, "enabled": false, "item": "virtlogd.socket", "name": "virtlogd.socket", "state": "stopped", "status": {"Accept": "no", "ActiveEnterTimestamp": "Fri 2018-06-22 04:45:49 EDT", "ActiveEnterTimestampMonotonic": "3202272", "ActiveExitTimestamp": "Fri 2018-06-22 04:50:37 EDT", "ActiveExitTimestampMonotonic": "291503767", "ActiveState": "inactive", "After": "-.slice sysinit.target -.mount", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Fri 
2018-06-22 04:45:49 EDT", "AssertTimestampMonotonic": "3201122", "Backlog": "128", "Before": "sockets.target shutdown.target libvirtd.service virtlogd.service", "BindIPv6Only": "default", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "Broadcast": "no", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Fri 2018-06-22 04:45:49 EDT", "ConditionTimestampMonotonic": "3201122", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "DeferAcceptUSec": "0", "Delegate": "no", "Description": "Virtual machine log manager socket", "DevicePolicy": "auto", "DirectoryMode": "0755", "FragmentPath": "/usr/lib/systemd/system/virtlogd.socket", "FreeBind": "no", "IOScheduling": "0", "IPTOS": "-1", "IPTTL": "-1", "Id": "virtlogd.socket", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestamp": "Fri 2018-06-22 04:50:37 EDT", "InactiveEnterTimestampMonotonic": "291503767", "InactiveExitTimestamp": "Fri 2018-06-22 04:45:49 EDT", "InactiveExitTimestampMonotonic": "3202272", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KeepAlive": "no", "KeepAliveIntervalUSec": "0", "KeepAliveProbes": "0", "KeepAliveTimeUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "4096", "LimitNPROC": "22967", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", 
"LimitSIGPENDING": "22967", "LimitSTACK": "18446744073709551615", "ListenStream": "/var/run/libvirt/virtlogd-sock", "LoadState": "loaded", "Mark": "-1", "MaxConnections": "64", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "NAccepted": "0", "NConnections": "0", "Names": "virtlogd.socket", "NeedDaemonReload": "no", "Nice": "0", "NoDelay": "no", "NoNewPrivileges": "no", "NonBlocking": "no", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PassCredentials": "no", "PassSecurity": "no", "PipeSize": "0", "Priority": "-1", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "ReceiveBuffer": "0", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemoveOnStop": "no", "RequiredBy": "libvirtd.service virtlogd.service", "Requires": "sysinit.target -.mount", "RequiresMountsFor": "/var/run/libvirt/virtlogd-sock", "Result": "success", "ReusePort": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendBuffer": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "SocketMode": "0666", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Transparent": "no", "Triggers": "virtlogd.service", "UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState": "disabled", "Wants": "-.slice"}} >2018-06-22 04:51:57,623 p=11115 u=mistral | TASK [create persistent directories] ******************************************* >2018-06-22 04:51:57,651 
p=11115 u=mistral | skipping: [controller-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:57,652 p=11115 u=mistral | skipping: [controller-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:57,677 p=11115 u=mistral | skipping: [compute-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:57,678 p=11115 u=mistral | skipping: [compute-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:57,690 p=11115 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:57,695 p=11115 u=mistral | skipping: [ceph-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:57,716 p=11115 u=mistral | TASK [cinder logs readme] ****************************************************** >2018-06-22 04:51:57,742 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:57,766 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:57,777 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:57,798 p=11115 u=mistral | TASK [ensure ceph configurations exist] **************************************** >2018-06-22 04:51:57,825 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:57,852 
p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:57,863 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:57,884 p=11115 u=mistral | TASK [cinder_enable_iscsi_backend fact] **************************************** >2018-06-22 04:51:57,913 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:57,936 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:57,948 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:57,969 p=11115 u=mistral | TASK [cinder create LVM volume group dd] *************************************** >2018-06-22 04:51:57,995 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:58,018 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:58,030 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:58,050 p=11115 u=mistral | TASK [cinder create LVM volume group] ****************************************** >2018-06-22 04:51:58,076 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:58,098 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:58,110 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:58,131 p=11115 u=mistral | TASK [stat /lib/systemd/system/iscsid.socket] ********************************** >2018-06-22 04:51:58,157 
p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:58,182 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:58,194 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:58,215 p=11115 u=mistral | TASK [Stop and disable iscsid.socket service] ********************************** >2018-06-22 04:51:58,242 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:58,266 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:58,278 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:58,298 p=11115 u=mistral | TASK [create persistent directories] ******************************************* >2018-06-22 04:51:58,325 p=11115 u=mistral | skipping: [controller-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:58,326 p=11115 u=mistral | skipping: [controller-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:58,351 p=11115 u=mistral | skipping: [controller-0] => (item=/var/log/containers) => {"changed": false, "item": "/var/log/containers", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:58,353 p=11115 u=mistral | skipping: [compute-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:58,353 p=11115 u=mistral | skipping: [compute-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >2018-06-22 
04:51:58,354 p=11115 u=mistral | skipping: [compute-0] => (item=/var/log/containers) => {"changed": false, "item": "/var/log/containers", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:58,363 p=11115 u=mistral | skipping: [ceph-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:58,368 p=11115 u=mistral | skipping: [ceph-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:58,372 p=11115 u=mistral | skipping: [ceph-0] => (item=/var/log/containers) => {"changed": false, "item": "/var/log/containers", "skip_reason": "Conditional result was False"} >2018-06-22 04:51:58,392 p=11115 u=mistral | TASK [Set swift_use_local_disks fact] ****************************************** >2018-06-22 04:51:58,417 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:58,437 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:58,453 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:58,479 p=11115 u=mistral | TASK [Create Swift d1 directory if needed] ************************************* >2018-06-22 04:51:58,549 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:58,577 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:58,587 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:58,610 p=11115 u=mistral | TASK [Create swift logging symlink] ******************************************** >2018-06-22 04:51:58,640 p=11115 u=mistral | skipping: 
[controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:58,664 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:58,676 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:58,699 p=11115 u=mistral | TASK [swift logs readme] ******************************************************* >2018-06-22 04:51:58,753 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:58,754 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:58,763 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:51:58,787 p=11115 u=mistral | TASK [Format SwiftRawDisks] **************************************************** >2018-06-22 04:51:58,872 p=11115 u=mistral | TASK [Mount devices defined in SwiftRawDisks] ********************************** >2018-06-22 04:51:58,940 p=11115 u=mistral | PLAY [External deployment step 1] ********************************************** >2018-06-22 04:51:58,963 p=11115 u=mistral | TASK [set blacklisted_hostnames] *********************************************** >2018-06-22 04:51:58,991 p=11115 u=mistral | ok: [undercloud] => {"ansible_facts": {"blacklisted_hostnames": []}, "changed": false} >2018-06-22 04:51:59,009 p=11115 u=mistral | TASK [create ceph-ansible temp dirs] ******************************************* >2018-06-22 04:51:59,205 p=11115 u=mistral | changed: [undercloud] => (item=/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/group_vars) => {"changed": true, "gid": 985, "group": "mistral", "item": "/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/group_vars", "mode": "0755", "owner": "mistral", "path": 
"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/group_vars", "secontext": "system_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 988} >2018-06-22 04:51:59,373 p=11115 u=mistral | changed: [undercloud] => (item=/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/host_vars) => {"changed": true, "gid": 985, "group": "mistral", "item": "/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/host_vars", "mode": "0755", "owner": "mistral", "path": "/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/host_vars", "secontext": "system_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 988} >2018-06-22 04:51:59,536 p=11115 u=mistral | changed: [undercloud] => (item=/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir) => {"changed": true, "gid": 985, "group": "mistral", "item": "/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir", "mode": "0755", "owner": "mistral", "path": "/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir", "secontext": "system_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 988} >2018-06-22 04:51:59,554 p=11115 u=mistral | TASK [generate inventory] ****************************************************** >2018-06-22 04:52:00,114 p=11115 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "4ee1040a624f492f1e7e8c686e8074367263cce5", "dest": "/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/inventory.yml", "gid": 985, "group": "mistral", "md5sum": "ea7992497f47ef881fa595084fa971f5", "mode": "0644", "owner": "mistral", "secontext": "system_u:object_r:var_lib_t:s0", "size": 527, "src": "/home/mistral/.ansible/tmp/ansible-tmp-1529657519.82-183893402628477/source", "state": "file", "uid": 988} >2018-06-22 04:52:00,131 p=11115 u=mistral | TASK [set ceph-ansible group vars all] ***************************************** >2018-06-22 04:52:00,166 
p=11115 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_group_vars_all": {"ceph_conf_overrides": {"global": {"osd_pool_default_pg_num": 32, "osd_pool_default_pgp_num": 32, "osd_pool_default_size": 1, "rgw_keystone_accepted_roles": "Member, admin", "rgw_keystone_admin_domain": "default", "rgw_keystone_admin_password": "r4vvqGIopZIGavHfqwBD5EZm2", "rgw_keystone_admin_project": "service", "rgw_keystone_admin_user": "swift", "rgw_keystone_api_version": 3, "rgw_keystone_implicit_tenants": "true", "rgw_keystone_revocation_interval": "0", "rgw_keystone_url": "http://172.17.1.11:5000", "rgw_s3_auth_use_keystone": "true"}}, "ceph_docker_image": "rhceph", "ceph_docker_image_tag": "3-6", "ceph_docker_registry": "192.168.24.1:8787", "ceph_origin": "distro", "ceph_stable": true, "cluster": "ceph", "cluster_network": "172.17.4.0/24", "containerized_deployment": true, "docker": true, "fsid": "53912472-747b-11e8-95a3-5254003d7dcb", "generate_fsid": false, "ip_version": "ipv4", "keys": [{"key": "AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==", "mgr_cap": "allow *", "mode": "0600", "mon_cap": "allow r", "name": "client.openstack", "osd_cap": "allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics"}, {"key": "AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw==", "mds_cap": "allow *", "mgr_cap": "allow *", "mode": "0600", "mon_cap": "allow r, allow command \\\"auth del\\\", allow command \\\"auth caps\\\", allow command \\\"auth get\\\", allow command \\\"auth get-or-create\\\"", "name": "client.manila", "osd_cap": "allow rw"}, {"key": "AQB2NypbAAAAABAA2eU0laDIiJGj56O30KoIdw==", "mgr_cap": "allow *", "mode": "0600", "mon_cap": "allow rw", "name": "client.radosgw", "osd_cap": "allow rwx"}], "monitor_address_block": "172.17.3.0/24", "ntp_service_enabled": false, "openstack_config": true, "openstack_keys": [{"key": "AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==", "mgr_cap": "allow *", 
"mode": "0600", "mon_cap": "allow r", "name": "client.openstack", "osd_cap": "allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics"}, {"key": "AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw==", "mds_cap": "allow *", "mgr_cap": "allow *", "mode": "0600", "mon_cap": "allow r, allow command \\\"auth del\\\", allow command \\\"auth caps\\\", allow command \\\"auth get\\\", allow command \\\"auth get-or-create\\\"", "name": "client.manila", "osd_cap": "allow rw"}, {"key": "AQB2NypbAAAAABAA2eU0laDIiJGj56O30KoIdw==", "mgr_cap": "allow *", "mode": "0600", "mon_cap": "allow rw", "name": "client.radosgw", "osd_cap": "allow rwx"}], "openstack_pools": [{"application": "rbd", "name": "images", "pg_num": 32, "rule_name": ""}, {"application": "openstack_gnocchi", "name": "metrics", "pg_num": 32, "rule_name": ""}, {"application": "rbd", "name": "backups", "pg_num": 32, "rule_name": ""}, {"application": "rbd", "name": "vms", "pg_num": 32, "rule_name": ""}, {"application": "rbd", "name": "volumes", "pg_num": 32, "rule_name": ""}], "pools": [], "public_network": "172.17.3.0/24", "user_config": true}}, "changed": false} >2018-06-22 04:52:00,185 p=11115 u=mistral | TASK [generate ceph-ansible group vars all] ************************************ >2018-06-22 04:52:00,523 p=11115 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "461f759837a8a20a8869c7d556e74e223d4c8f4c", "dest": "/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/group_vars/all.yml", "gid": 985, "group": "mistral", "md5sum": "9a790ba48384b2442a5ad84fdd98deb6", "mode": "0644", "owner": "mistral", "secontext": "system_u:object_r:var_lib_t:s0", "size": 3030, "src": "/home/mistral/.ansible/tmp/ansible-tmp-1529657520.22-244388231262684/source", "state": "file", "uid": 988} >2018-06-22 04:52:00,540 p=11115 u=mistral | TASK [set ceph-ansible extra vars] ********************************************* 
>2018-06-22 04:52:00,571 p=11115 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_extra_vars": {"fetch_directory": "/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir", "ireallymeanit": "yes"}}, "changed": false} >2018-06-22 04:52:00,590 p=11115 u=mistral | TASK [generate ceph-ansible extra vars] **************************************** >2018-06-22 04:52:00,925 p=11115 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "3083c1e649f767be31c28ad17dcb3504b2600fb0", "dest": "/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/extra_vars.yml", "gid": 985, "group": "mistral", "md5sum": "70b3bbf3c41823fc0c09425dc4c08267", "mode": "0644", "owner": "mistral", "secontext": "system_u:object_r:var_lib_t:s0", "size": 115, "src": "/home/mistral/.ansible/tmp/ansible-tmp-1529657520.62-118366741346294/source", "state": "file", "uid": 988} >2018-06-22 04:52:00,942 p=11115 u=mistral | TASK [generate collect nodes uuid playbook] ************************************ >2018-06-22 04:52:01,279 p=11115 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "0ed9243967d775f1d706f954c81c53dbea91f151", "dest": "/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/nodes_uuid_playbook.yml", "gid": 985, "group": "mistral", "md5sum": "afa7e006582a1713f57c3de7724c9f39", "mode": "0644", "owner": "mistral", "secontext": "system_u:object_r:var_lib_t:s0", "size": 157, "src": "/home/mistral/.ansible/tmp/ansible-tmp-1529657520.97-189018815889999/source", "state": "file", "uid": 988} >2018-06-22 04:52:01,295 p=11115 u=mistral | TASK [set ceph-ansible verbosity] ********************************************** >2018-06-22 04:52:01,312 p=11115 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:52:01,327 p=11115 u=mistral | TASK [set ceph-ansible command] ************************************************ >2018-06-22 04:52:01,345 p=11115 u=mistral 
| skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:52:01,360 p=11115 u=mistral | TASK [run ceph-ansible] ******************************************************** >2018-06-22 04:52:01,378 p=11115 u=mistral | skipping: [undercloud] => (item=/usr/share/ceph-ansible/site-docker.yml.sample) => {"changed": false, "item": "/usr/share/ceph-ansible/site-docker.yml.sample", "skip_reason": "Conditional result was False"} >2018-06-22 04:52:01,394 p=11115 u=mistral | TASK [set ceph-ansible group vars mgrs] **************************************** >2018-06-22 04:52:01,421 p=11115 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_group_vars_mgrs": {"ceph_mgr_docker_extra_env": "-e MGR_DASHBOARD=0"}}, "changed": false} >2018-06-22 04:52:01,438 p=11115 u=mistral | TASK [generate ceph-ansible group vars mgrs] *********************************** >2018-06-22 04:52:01,759 p=11115 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "06d130f3471f2ac09bb0161450878cf64bafd8af", "dest": "/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/group_vars/mgrs.yml", "gid": 985, "group": "mistral", "md5sum": "0d3c03a4186ad82120a728e0470a49d9", "mode": "0644", "owner": "mistral", "secontext": "system_u:object_r:var_lib_t:s0", "size": 46, "src": "/home/mistral/.ansible/tmp/ansible-tmp-1529657521.47-108859161226761/source", "state": "file", "uid": 988} >2018-06-22 04:52:01,777 p=11115 u=mistral | TASK [set ceph-ansible group vars mons] **************************************** >2018-06-22 04:52:01,808 p=11115 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_group_vars_mons": {"admin_secret": "AQB2NypbAAAAABAADYq0x/U/g/5X5IHsGSXANQ==", "monitor_secret": "AQB2NypbAAAAABAA67vSeiofLzzYgrjDnmeGYg=="}}, "changed": false} >2018-06-22 04:52:01,827 p=11115 u=mistral | TASK [generate ceph-ansible group vars mons] *********************************** >2018-06-22 04:52:02,151 p=11115 u=mistral | 
changed: [undercloud] => {"changed": true, "checksum": "719e0f5af2a6bb3f7c520087bffa8e6653fc9cbd", "dest": "/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/group_vars/mons.yml", "gid": 985, "group": "mistral", "md5sum": "6826ff7a84879618ddc5f5704567757d", "mode": "0644", "owner": "mistral", "secontext": "system_u:object_r:var_lib_t:s0", "size": 112, "src": "/home/mistral/.ansible/tmp/ansible-tmp-1529657521.85-280167245012123/source", "state": "file", "uid": 988} >2018-06-22 04:52:02,168 p=11115 u=mistral | TASK [set ceph-ansible group vars clients] ************************************* >2018-06-22 04:52:02,195 p=11115 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_group_vars_clients": {}}, "changed": false} >2018-06-22 04:52:02,212 p=11115 u=mistral | TASK [generate ceph-ansible group vars clients] ******************************** >2018-06-22 04:52:02,536 p=11115 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/group_vars/clients.yml", "gid": 985, "group": "mistral", "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0644", "owner": "mistral", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/mistral/.ansible/tmp/ansible-tmp-1529657522.24-275196831816823/source", "state": "file", "uid": 988} >2018-06-22 04:52:02,553 p=11115 u=mistral | TASK [set ceph-ansible group vars osds] **************************************** >2018-06-22 04:52:02,584 p=11115 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_group_vars_osds": {"devices": ["/dev/vdb"], "journal_size": 512, "osd_objectstore": "filestore", "osd_scenario": "collocated"}}, "changed": false} >2018-06-22 04:52:02,600 p=11115 u=mistral | TASK [generate ceph-ansible group vars osds] *********************************** >2018-06-22 04:52:02,927 p=11115 u=mistral | changed: [undercloud] => {"changed": true, 
"checksum": "454c7fd1ab87fd8f8ec07c9874039814cbe681cf", "dest": "/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/group_vars/osds.yml", "gid": 985, "group": "mistral", "md5sum": "e03a30f138554d36c1743c14fd3d9357", "mode": "0644", "owner": "mistral", "secontext": "system_u:object_r:var_lib_t:s0", "size": 90, "src": "/home/mistral/.ansible/tmp/ansible-tmp-1529657522.63-169755058061829/source", "state": "file", "uid": 988} >2018-06-22 04:52:02,934 p=11115 u=mistral | PLAY [Overcloud deploy step tasks for 1] *************************************** >2018-06-22 04:52:02,963 p=11115 u=mistral | TASK [include_role] ************************************************************ >2018-06-22 04:52:03,019 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:52:03,030 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:52:03,098 p=11115 u=mistral | TASK [container-registry : enable net.ipv4.ip_forward] ************************* >2018-06-22 04:52:03,595 p=11115 u=mistral | changed: [controller-0] => {"changed": true} >2018-06-22 04:52:03,619 p=11115 u=mistral | TASK [container-registry : ensure docker is installed] ************************* >2018-06-22 04:52:04,262 p=11115 u=mistral | ok: [controller-0] => {"changed": false, "msg": "", "rc": 0, "results": ["2:docker-1.13.1-63.git94f4240.el7.x86_64 providing docker is already installed"]} >2018-06-22 04:52:04,285 p=11115 u=mistral | TASK [container-registry : manage /etc/systemd/system/docker.service.d] ******** >2018-06-22 04:52:04,625 p=11115 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/systemd/system/docker.service.d", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:52:04,649 p=11115 u=mistral | TASK 
[container-registry : unset mountflags] *********************************** >2018-06-22 04:52:05,154 p=11115 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0644", "msg": "section and option added", "owner": "root", "path": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 25, "state": "file", "uid": 0} >2018-06-22 04:52:05,175 p=11115 u=mistral | TASK [container-registry : configure OPTIONS in /etc/sysconfig/docker] ********* >2018-06-22 04:52:05,662 p=11115 u=mistral | changed: [controller-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-06-22 04:52:05,683 p=11115 u=mistral | TASK [container-registry : configure INSECURE_REGISTRY in /etc/sysconfig/docker] *** >2018-06-22 04:52:06,024 p=11115 u=mistral | changed: [controller-0] => {"backup": "", "changed": true, "msg": "line added"} >2018-06-22 04:52:06,046 p=11115 u=mistral | TASK [container-registry : Create additional socket directories] *************** >2018-06-22 04:52:06,389 p=11115 u=mistral | changed: [controller-0] => (item=/var/lib/openstack/docker.sock) => {"changed": true, "gid": 0, "group": "root", "item": "/var/lib/openstack/docker.sock", "mode": "0755", "owner": "root", "path": "/var/lib/openstack", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:52:06,415 p=11115 u=mistral | TASK [container-registry : manage /etc/docker/daemon.json] ********************* >2018-06-22 04:52:07,028 p=11115 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "d1771eedce1344ec4d3895016dc72907c117e86b", "dest": "/etc/docker/daemon.json", "gid": 0, "group": "root", "md5sum": "ae138a173e2cfb9312379cf88457c29e", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:container_config_t:s0", "size": 20, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657526.46-201388305936685/source", 
"state": "file", "uid": 0} >2018-06-22 04:52:07,049 p=11115 u=mistral | TASK [container-registry : configure DOCKER_STORAGE_OPTIONS in /etc/sysconfig/docker-storage] *** >2018-06-22 04:52:07,382 p=11115 u=mistral | changed: [controller-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-06-22 04:52:07,404 p=11115 u=mistral | TASK [container-registry : configure DOCKER_NETWORK_OPTIONS in /etc/sysconfig/docker-network] *** >2018-06-22 04:52:07,742 p=11115 u=mistral | changed: [controller-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-06-22 04:52:07,766 p=11115 u=mistral | TASK [container-registry : ensure docker group exists] ************************* >2018-06-22 04:52:08,112 p=11115 u=mistral | changed: [controller-0] => {"changed": true, "gid": 1003, "name": "docker", "state": "present", "system": false} >2018-06-22 04:52:08,135 p=11115 u=mistral | TASK [container-registry : add deployment user to docker group] **************** >2018-06-22 04:52:08,157 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:52:08,179 p=11115 u=mistral | TASK [container-registry : force systemd to reread configs] ******************** >2018-06-22 04:52:08,584 p=11115 u=mistral | ok: [controller-0] => {"changed": false, "name": null, "status": {}} >2018-06-22 04:52:08,607 p=11115 u=mistral | TASK [container-registry : enable and start docker] **************************** >2018-06-22 04:52:10,336 p=11115 u=mistral | changed: [controller-0] => {"changed": true, "enabled": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "systemd-journald.socket registries.service docker-storage-setup.service basic.target system.slice network.target rhel-push-plugin.socket", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "no", "AssertTimestampMonotonic": "0", 
"Before": "shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", "DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/docker.service", 
"GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": "1048576", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "127793", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "0", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "all", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "docker-cleanup.timer registries.service basic.target rhel-push-plugin.socket", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", 
"StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "docker-storage-setup.service system.slice", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}} >2018-06-22 04:52:10,359 p=11115 u=mistral | TASK [include_role] ************************************************************ >2018-06-22 04:52:10,390 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:52:10,427 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:52:10,469 p=11115 u=mistral | TASK [container-registry : enable net.ipv4.ip_forward] ************************* >2018-06-22 04:52:10,858 p=11115 u=mistral | changed: [compute-0] => {"changed": true} >2018-06-22 04:52:10,877 p=11115 u=mistral | TASK [container-registry : ensure docker is installed] ************************* >2018-06-22 04:52:11,542 p=11115 u=mistral | ok: [compute-0] => {"changed": false, "msg": "", "rc": 0, "results": ["2:docker-1.13.1-63.git94f4240.el7.x86_64 providing docker is already installed"]} >2018-06-22 04:52:11,559 p=11115 u=mistral | TASK [container-registry : manage /etc/systemd/system/docker.service.d] ******** >2018-06-22 04:52:11,898 p=11115 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/systemd/system/docker.service.d", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 6, 
"state": "directory", "uid": 0} >2018-06-22 04:52:11,917 p=11115 u=mistral | TASK [container-registry : unset mountflags] *********************************** >2018-06-22 04:52:12,269 p=11115 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0644", "msg": "section and option added", "owner": "root", "path": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 25, "state": "file", "uid": 0} >2018-06-22 04:52:12,285 p=11115 u=mistral | TASK [container-registry : configure OPTIONS in /etc/sysconfig/docker] ********* >2018-06-22 04:52:12,630 p=11115 u=mistral | changed: [compute-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-06-22 04:52:12,648 p=11115 u=mistral | TASK [container-registry : configure INSECURE_REGISTRY in /etc/sysconfig/docker] *** >2018-06-22 04:52:12,995 p=11115 u=mistral | changed: [compute-0] => {"backup": "", "changed": true, "msg": "line added"} >2018-06-22 04:52:13,013 p=11115 u=mistral | TASK [container-registry : Create additional socket directories] *************** >2018-06-22 04:52:13,368 p=11115 u=mistral | changed: [compute-0] => (item=/var/lib/openstack/docker.sock) => {"changed": true, "gid": 0, "group": "root", "item": "/var/lib/openstack/docker.sock", "mode": "0755", "owner": "root", "path": "/var/lib/openstack", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:52:13,393 p=11115 u=mistral | TASK [container-registry : manage /etc/docker/daemon.json] ********************* >2018-06-22 04:52:14,026 p=11115 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "d1771eedce1344ec4d3895016dc72907c117e86b", "dest": "/etc/docker/daemon.json", "gid": 0, "group": "root", "md5sum": "ae138a173e2cfb9312379cf88457c29e", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:container_config_t:s0", "size": 20, "src": 
"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657533.43-128423886192900/source", "state": "file", "uid": 0} >2018-06-22 04:52:14,044 p=11115 u=mistral | TASK [container-registry : configure DOCKER_STORAGE_OPTIONS in /etc/sysconfig/docker-storage] *** >2018-06-22 04:52:14,393 p=11115 u=mistral | changed: [compute-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-06-22 04:52:14,409 p=11115 u=mistral | TASK [container-registry : configure DOCKER_NETWORK_OPTIONS in /etc/sysconfig/docker-network] *** >2018-06-22 04:52:14,761 p=11115 u=mistral | changed: [compute-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-06-22 04:52:14,778 p=11115 u=mistral | TASK [container-registry : ensure docker group exists] ************************* >2018-06-22 04:52:15,128 p=11115 u=mistral | changed: [compute-0] => {"changed": true, "gid": 1003, "name": "docker", "state": "present", "system": false} >2018-06-22 04:52:15,149 p=11115 u=mistral | TASK [container-registry : add deployment user to docker group] **************** >2018-06-22 04:52:15,170 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:52:15,188 p=11115 u=mistral | TASK [container-registry : force systemd to reread configs] ******************** >2018-06-22 04:52:15,590 p=11115 u=mistral | ok: [compute-0] => {"changed": false, "name": null, "status": {}} >2018-06-22 04:52:15,608 p=11115 u=mistral | TASK [container-registry : enable and start docker] **************************** >2018-06-22 04:52:17,320 p=11115 u=mistral | changed: [compute-0] => {"changed": true, "enabled": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "rhel-push-plugin.socket network.target systemd-journald.socket system.slice basic.target docker-storage-setup.service registries.service", "AllowIsolate": "no", 
"AmbientCapabilities": "0", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Before": "shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", "DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", 
"FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/docker.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": "1048576", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "22967", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "0", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "all", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "rhel-push-plugin.socket registries.service docker-cleanup.timer basic.target", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", 
"StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "system.slice docker-storage-setup.service", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}} >2018-06-22 04:52:17,344 p=11115 u=mistral | TASK [include_role] ************************************************************ >2018-06-22 04:52:17,370 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:52:17,395 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:52:17,407 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:52:17,429 p=11115 u=mistral | TASK [include_role] ************************************************************ >2018-06-22 04:52:17,456 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:52:17,480 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:52:17,491 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:52:17,513 p=11115 u=mistral | TASK [include_role] ************************************************************ >2018-06-22 04:52:17,538 p=11115 u=mistral | skipping: 
[controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:52:17,561 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:52:17,616 p=11115 u=mistral | TASK [container-registry : enable net.ipv4.ip_forward] ************************* >2018-06-22 04:52:17,941 p=11115 u=mistral | changed: [ceph-0] => {"changed": true} >2018-06-22 04:52:17,959 p=11115 u=mistral | TASK [container-registry : ensure docker is installed] ************************* >2018-06-22 04:52:18,572 p=11115 u=mistral | ok: [ceph-0] => {"changed": false, "msg": "", "rc": 0, "results": ["2:docker-1.13.1-63.git94f4240.el7.x86_64 providing docker is already installed"]} >2018-06-22 04:52:18,589 p=11115 u=mistral | TASK [container-registry : manage /etc/systemd/system/docker.service.d] ******** >2018-06-22 04:52:18,919 p=11115 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/systemd/system/docker.service.d", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:52:18,939 p=11115 u=mistral | TASK [container-registry : unset mountflags] *********************************** >2018-06-22 04:52:19,274 p=11115 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0644", "msg": "section and option added", "owner": "root", "path": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 25, "state": "file", "uid": 0} >2018-06-22 04:52:19,290 p=11115 u=mistral | TASK [container-registry : configure OPTIONS in /etc/sysconfig/docker] ********* >2018-06-22 04:52:19,629 p=11115 u=mistral | changed: [ceph-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-06-22 04:52:19,647 p=11115 u=mistral | TASK [container-registry : configure INSECURE_REGISTRY 
in /etc/sysconfig/docker] *** >2018-06-22 04:52:19,984 p=11115 u=mistral | changed: [ceph-0] => {"backup": "", "changed": true, "msg": "line added"} >2018-06-22 04:52:20,001 p=11115 u=mistral | TASK [container-registry : Create additional socket directories] *************** >2018-06-22 04:52:20,337 p=11115 u=mistral | changed: [ceph-0] => (item=/var/lib/openstack/docker.sock) => {"changed": true, "gid": 0, "group": "root", "item": "/var/lib/openstack/docker.sock", "mode": "0755", "owner": "root", "path": "/var/lib/openstack", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:52:20,366 p=11115 u=mistral | TASK [container-registry : manage /etc/docker/daemon.json] ********************* >2018-06-22 04:52:20,924 p=11115 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "d1771eedce1344ec4d3895016dc72907c117e86b", "dest": "/etc/docker/daemon.json", "gid": 0, "group": "root", "md5sum": "ae138a173e2cfb9312379cf88457c29e", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:container_config_t:s0", "size": 20, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657540.41-131950415836257/source", "state": "file", "uid": 0} >2018-06-22 04:52:20,942 p=11115 u=mistral | TASK [container-registry : configure DOCKER_STORAGE_OPTIONS in /etc/sysconfig/docker-storage] *** >2018-06-22 04:52:21,255 p=11115 u=mistral | changed: [ceph-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-06-22 04:52:21,272 p=11115 u=mistral | TASK [container-registry : configure DOCKER_NETWORK_OPTIONS in /etc/sysconfig/docker-network] *** >2018-06-22 04:52:21,587 p=11115 u=mistral | changed: [ceph-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-06-22 04:52:21,604 p=11115 u=mistral | TASK [container-registry : ensure docker group exists] ************************* >2018-06-22 04:52:21,915 p=11115 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 1003, "name": "docker", "state": 
"present", "system": false} >2018-06-22 04:52:21,934 p=11115 u=mistral | TASK [container-registry : add deployment user to docker group] **************** >2018-06-22 04:52:21,956 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:52:21,974 p=11115 u=mistral | TASK [container-registry : force systemd to reread configs] ******************** >2018-06-22 04:52:22,362 p=11115 u=mistral | ok: [ceph-0] => {"changed": false, "name": null, "status": {}} >2018-06-22 04:52:22,381 p=11115 u=mistral | TASK [container-registry : enable and start docker] **************************** >2018-06-22 04:52:24,110 p=11115 u=mistral | changed: [ceph-0] => {"changed": true, "enabled": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "rhel-push-plugin.socket system.slice network.target registries.service docker-storage-setup.service systemd-journald.socket basic.target", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Before": "shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", "DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 
PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/docker.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": "1048576", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "14904", "LimitSTACK": "18446744073709551615", "LoadState": 
"loaded", "MainPID": "0", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "all", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "basic.target registries.service rhel-push-plugin.socket docker-cleanup.timer", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "system.slice docker-storage-setup.service", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}} >2018-06-22 04:52:24,111 p=11115 u=mistral | RUNNING HANDLER [container-registry : restart docker] ************************** >2018-06-22 04:52:26,826 p=11115 u=mistral | changed: 
[controller-0] => {"changed": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestamp": "Fri 2018-06-22 04:52:10 EDT", "ActiveEnterTimestampMonotonic": "387084796", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "rhel-push-plugin.socket systemd-journald.socket docker-storage-setup.service basic.target registries.service system.slice network.target", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Fri 2018-06-22 04:52:09 EDT", "AssertTimestampMonotonic": "385899779", "Before": "shutdown.target multi-user.target", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Fri 2018-06-22 04:52:09 EDT", "ConditionTimestampMonotonic": "385899779", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/docker.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", "DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "25595", "ExecMainStartTimestamp": "Fri 2018-06-22 04:52:09 EDT", "ExecMainStartTimestampMonotonic": "385901392", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; 
status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[Fri 2018-06-22 04:52:09 EDT] ; stop_time=[n/a] ; pid=25595 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/docker.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Fri 2018-06-22 04:52:09 EDT", "InactiveExitTimestampMonotonic": "385901426", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": "1048576", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "127793", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "25595", "MemoryAccounting": "no", "MemoryCurrent": "67579904", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "all", "OOMScoreAdjust": "0", 
"OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "registries.service basic.target rhel-push-plugin.socket docker-cleanup.timer", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "23", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "enabled", "WantedBy": "multi-user.target", "Wants": "docker-storage-setup.service system.slice", "WatchdogTimestamp": "Fri 2018-06-22 04:52:10 EDT", "WatchdogTimestampMonotonic": "387084739", "WatchdogUSec": "0"}} >2018-06-22 04:52:26,840 p=11115 u=mistral | changed: [compute-0] => {"changed": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestamp": "Fri 2018-06-22 04:52:17 EDT", "ActiveEnterTimestampMonotonic": "390898100", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "systemd-journald.socket system.slice network.target registries.service rhel-push-plugin.socket 
basic.target docker-storage-setup.service", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Fri 2018-06-22 04:52:16 EDT", "AssertTimestampMonotonic": "389723690", "Before": "multi-user.target shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Fri 2018-06-22 04:52:16 EDT", "ConditionTimestampMonotonic": "389723690", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/docker.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", "DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "19364", "ExecMainStartTimestamp": "Fri 2018-06-22 04:52:16 EDT", "ExecMainStartTimestampMonotonic": "389725106", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current 
--init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[Fri 2018-06-22 04:52:16 EDT] ; stop_time=[n/a] ; pid=19364 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/docker.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Fri 2018-06-22 04:52:16 EDT", "InactiveExitTimestampMonotonic": "389725146", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": "1048576", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "22967", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "19364", "MemoryAccounting": "no", "MemoryCurrent": "65433600", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "all", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "docker-cleanup.timer basic.target 
registries.service rhel-push-plugin.socket", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "20", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "enabled", "WantedBy": "multi-user.target", "Wants": "system.slice docker-storage-setup.service", "WatchdogTimestamp": "Fri 2018-06-22 04:52:17 EDT", "WatchdogTimestampMonotonic": "390898055", "WatchdogUSec": "0"}} >2018-06-22 04:52:26,856 p=11115 u=mistral | changed: [ceph-0] => {"changed": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestamp": "Fri 2018-06-22 04:52:24 EDT", "ActiveEnterTimestampMonotonic": "393775554", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "docker-storage-setup.service systemd-journald.socket basic.target rhel-push-plugin.socket system.slice registries.service network.target", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Fri 2018-06-22 04:52:22 EDT", "AssertTimestampMonotonic": "392559814", "Before": "shutdown.target multi-user.target", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", 
"CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Fri 2018-06-22 04:52:22 EDT", "ConditionTimestampMonotonic": "392559813", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/docker.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", "DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "16591", "ExecMainStartTimestamp": "Fri 2018-06-22 04:52:22 EDT", "ExecMainStartTimestampMonotonic": "392560793", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[Fri 2018-06-22 04:52:22 EDT] ; stop_time=[n/a] ; pid=16591 ; code=(null) ; status=0/0 }", "FailureAction": "none", 
"FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/docker.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Fri 2018-06-22 04:52:22 EDT", "InactiveExitTimestampMonotonic": "392560821", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": "1048576", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "14904", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "16591", "MemoryAccounting": "no", "MemoryCurrent": "60604416", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "all", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "docker-cleanup.timer rhel-push-plugin.socket basic.target registries.service", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", 
"StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "16", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "enabled", "WantedBy": "multi-user.target", "Wants": "system.slice docker-storage-setup.service", "WatchdogTimestamp": "Fri 2018-06-22 04:52:24 EDT", "WatchdogTimestampMonotonic": "393775508", "WatchdogUSec": "0"}} >2018-06-22 04:52:26,862 p=11115 u=mistral | PLAY [Overcloud common deploy step tasks 1] ************************************ >2018-06-22 04:52:26,890 p=11115 u=mistral | TASK [Create /var/lib/tripleo-config directory] ******************************** >2018-06-22 04:52:27,350 p=11115 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/tripleo-config", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:52:27,371 p=11115 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/tripleo-config", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:52:27,422 p=11115 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/tripleo-config", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 
04:52:27,444 p=11115 u=mistral | TASK [Write the puppet step_config manifest] *********************************** >2018-06-22 04:52:28,150 p=11115 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "be3cadf4421fbe374d33f269513ff6e3f1c7ab66", "dest": "/var/lib/tripleo-config/puppet_step_config.pp", "gid": 0, "group": "root", "md5sum": "86461fb932aeaba90516617c8168d5f2", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1576, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657547.51-228444146660564/source", "state": "file", "uid": 0} >2018-06-22 04:52:28,167 p=11115 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "149113e83b0cb4d05192576bcff7b6fc0f316bd0", "dest": "/var/lib/tripleo-config/puppet_step_config.pp", "gid": 0, "group": "root", "md5sum": "66bedc7c4ccee7cb079b118c09f8c08c", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1630, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657547.49-74238627505806/source", "state": "file", "uid": 0} >2018-06-22 04:52:28,205 p=11115 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "f8a32eb42203ada5e675fbde141df7f32100af5c", "dest": "/var/lib/tripleo-config/puppet_step_config.pp", "gid": 0, "group": "root", "md5sum": "c727dc3c35ede89e7c3d894e3fb81da4", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1588, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657547.56-180019045426888/source", "state": "file", "uid": 0} >2018-06-22 04:52:28,228 p=11115 u=mistral | TASK [Create /var/lib/docker-puppet] ******************************************* >2018-06-22 04:52:28,640 p=11115 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-puppet", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 30, "state": "directory", "uid": 0} >2018-06-22 
04:52:28,643 p=11115 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-puppet", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 30, "state": "directory", "uid": 0} >2018-06-22 04:52:28,690 p=11115 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-puppet", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 30, "state": "directory", "uid": 0} >2018-06-22 04:52:28,712 p=11115 u=mistral | TASK [Write docker-puppet.json file] ******************************************* >2018-06-22 04:52:29,424 p=11115 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "09cb610f7fea36dc33be3297b42ac38af987732e", "dest": "/var/lib/docker-puppet/docker-puppet.json", "gid": 0, "group": "root", "md5sum": "e806efb887de6e5795dea0490c302e84", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2288, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657548.79-128930564743710/source", "state": "file", "uid": 0} >2018-06-22 04:52:29,435 p=11115 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "c5bc7cf017025a018ebda9dd2ad6aac290a51bef", "dest": "/var/lib/docker-puppet/docker-puppet.json", "gid": 0, "group": "root", "md5sum": "b53dfdbc008416d050550640e4219f21", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 13304, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657548.79-224839922168458/source", "state": "file", "uid": 0} >2018-06-22 04:52:29,447 p=11115 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "c8d0c143121b7904490da6698d68f76bf1957b51", "dest": "/var/lib/docker-puppet/docker-puppet.json", "gid": 0, "group": "root", "md5sum": "c6d9b1246ac65ebadc18213639c2431d", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", 
"size": 234, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657548.8-73715868130314/source", "state": "file", "uid": 0} >2018-06-22 04:52:29,469 p=11115 u=mistral | TASK [Create /var/lib/docker-config-scripts] *********************************** >2018-06-22 04:52:29,849 p=11115 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-config-scripts", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:52:29,862 p=11115 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-config-scripts", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:52:29,880 p=11115 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-config-scripts", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:52:29,904 p=11115 u=mistral | TASK [Clean old /var/lib/docker-container-startup-configs.json file] *********** >2018-06-22 04:52:30,279 p=11115 u=mistral | ok: [controller-0] => {"changed": false, "path": "/var/lib/docker-container-startup-configs.json", "state": "absent"} >2018-06-22 04:52:30,306 p=11115 u=mistral | ok: [compute-0] => {"changed": false, "path": "/var/lib/docker-container-startup-configs.json", "state": "absent"} >2018-06-22 04:52:30,322 p=11115 u=mistral | ok: [ceph-0] => {"changed": false, "path": "/var/lib/docker-container-startup-configs.json", "state": "absent"} >2018-06-22 04:52:30,344 p=11115 u=mistral | TASK [Write docker config scripts] ********************************************* >2018-06-22 04:52:31,016 p=11115 u=mistral | changed: [compute-0] => (item={'value': {'content': u'#!/bin/bash\nset -xe\n/usr/bin/python -m 
neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n', 'mode': u'0755'}, 'key': u'neutron_ovs_agent_launcher.sh'}) => {"changed": true, "checksum": "03f62b0a94bee17ece72ba1a3fc7577e68d9e6a4", "dest": "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh", "gid": 0, "group": "root", "item": {"key": "neutron_ovs_agent_launcher.sh", "value": {"content": "#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n", "mode": "0755"}}, "md5sum": "1672c3fb89d576d045d5f3d5b23684c9", "mode": "0755", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 651, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657550.43-80942269500927/source", "state": "file", "uid": 0} >2018-06-22 04:52:31,037 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nexport 
OS_PROJECT_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_domain_name)\nexport OS_USER_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken user_domain_name)\nexport OS_PROJECT_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_name)\nexport OS_USERNAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken username)\nexport OS_PASSWORD=$(crudini --get /etc/nova/nova.conf keystone_authtoken password)\nexport OS_AUTH_URL=$(crudini --get /etc/nova/nova.conf keystone_authtoken auth_url)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho "(cellv2) Running cell_v2 host discovery"\ntimeout=600\nloop_wait=30\ndeclare -A discoverable_hosts\nfor host in $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e \'/^nil$/d\' | tr "," " "); do discoverable_hosts[$host]=1; done\ntimeout_at=$(( $(date +"%s") + ${timeout} ))\necho "(cellv2) Waiting ${timeout} seconds for hosts to register"\nfinished=0\nwhile : ; do\n for host in $(openstack -q compute service list -c \'Host\' -c \'Zone\' -f value | awk \'$2 != "internal" { print $1 }\'); do\n if (( discoverable_hosts[$host] == 1 )); then\n echo "(cellv2) compute node $host has registered"\n unset discoverable_hosts[$host]\n fi\n done\n finished=1\n for host in "${!discoverable_hosts[@]}"; do\n if (( ${discoverable_hosts[$host]} == 1 )); then\n echo "(cellv2) compute node $host has not registered"\n finished=0\n fi\n done\n remaining=$(( $timeout_at - $(date +"%s") ))\n if (( $finished == 1 )); then\n echo "(cellv2) All nodes registered"\n break\n elif (( $remaining <= 0 )); then\n echo "(cellv2) WARNING: timeout waiting for nodes to register, running host discovery regardless"\n echo "(cellv2) Expected host list:" $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e \'/^nil$/d\' | sort -u | tr \',\' \' \')\n echo "(cellv2) Detected host list:" $(openstack -q compute service list -c \'Host\' -c \'Zone\' -f value | awk \'$2 != 
"internal" { print $1 }\' | sort -u | tr \'\\n\', \' \')\n break\n else\n echo "(cellv2) Waiting ${remaining} seconds for hosts to register"\n sleep $loop_wait\n fi\ndone\necho "(cellv2) Running host discovery..."\nsu nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 discover_hosts --by-service --verbose"\n', 'mode': u'0700'}, 'key': 'nova_api_discover_hosts.sh'}) => {"changed": true, "checksum": "4e350e3d48cba294f2ccab34eb03c1dee23e7f82", "dest": "/var/lib/docker-config-scripts/nova_api_discover_hosts.sh", "gid": 0, "group": "root", "item": {"key": "nova_api_discover_hosts.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_domain_name)\nexport OS_USER_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken user_domain_name)\nexport OS_PROJECT_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_name)\nexport OS_USERNAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken username)\nexport OS_PASSWORD=$(crudini --get /etc/nova/nova.conf keystone_authtoken password)\nexport OS_AUTH_URL=$(crudini --get /etc/nova/nova.conf keystone_authtoken auth_url)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho \"(cellv2) Running cell_v2 host discovery\"\ntimeout=600\nloop_wait=30\ndeclare -A discoverable_hosts\nfor host in $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e '/^nil$/d' | tr \",\" \" \"); do discoverable_hosts[$host]=1; done\ntimeout_at=$(( $(date +\"%s\") + ${timeout} ))\necho \"(cellv2) Waiting ${timeout} seconds for hosts to register\"\nfinished=0\nwhile : ; do\n for host in $(openstack -q compute service list -c 'Host' -c 'Zone' -f value | awk '$2 != \"internal\" { print $1 }'); do\n if (( discoverable_hosts[$host] == 1 )); then\n echo \"(cellv2) compute node $host has registered\"\n unset discoverable_hosts[$host]\n fi\n done\n finished=1\n for host in \"${!discoverable_hosts[@]}\"; do\n if (( 
${discoverable_hosts[$host]} == 1 )); then\n echo \"(cellv2) compute node $host has not registered\"\n finished=0\n fi\n done\n remaining=$(( $timeout_at - $(date +\"%s\") ))\n if (( $finished == 1 )); then\n echo \"(cellv2) All nodes registered\"\n break\n elif (( $remaining <= 0 )); then\n echo \"(cellv2) WARNING: timeout waiting for nodes to register, running host discovery regardless\"\n echo \"(cellv2) Expected host list:\" $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e '/^nil$/d' | sort -u | tr ',' ' ')\n echo \"(cellv2) Detected host list:\" $(openstack -q compute service list -c 'Host' -c 'Zone' -f value | awk '$2 != \"internal\" { print $1 }' | sort -u | tr '\\n', ' ')\n break\n else\n echo \"(cellv2) Waiting ${remaining} seconds for hosts to register\"\n sleep $loop_wait\n fi\ndone\necho \"(cellv2) Running host discovery...\"\nsu nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 discover_hosts --by-service --verbose\"\n", "mode": "0700"}}, "md5sum": "ed5dca102b28b4f992943612dee2dced", "mode": "0700", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2318, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657550.42-95441385541701/source", "state": "file", "uid": 0} >2018-06-22 04:52:31,643 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho "Check if 
secret already exists"\nsecret_href=$(openstack secret list --name swift_root_secret_uuid)\nrc=$?\nif [[ $rc != 0 ]]; then\n echo "Failed to check secrets, check if Barbican in enabled and responding properly"\n exit $rc;\nfi\nif [ -z "$secret_href" ]; then\n echo "Create new secret"\n order_href=$(openstack secret order create --name swift_root_secret_uuid --payload-content-type="application/octet-stream" --algorithm aes --bit-length 256 --mode ctr key -f value -c "Order href")\nfi\n', 'mode': u'0700'}, 'key': 'create_swift_secret.sh'}) => {"changed": true, "checksum": "e77b96beec241bb84928d298a778521376225c0d", "dest": "/var/lib/docker-config-scripts/create_swift_secret.sh", "gid": 0, "group": "root", "item": {"key": "create_swift_secret.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho \"Check if secret already exists\"\nsecret_href=$(openstack secret list --name swift_root_secret_uuid)\nrc=$?\nif [[ $rc != 0 ]]; then\n echo \"Failed to check secrets, check if Barbican in enabled and responding properly\"\n exit $rc;\nfi\nif [ -z \"$secret_href\" ]; then\n echo \"Create new secret\"\n order_href=$(openstack secret order create --name swift_root_secret_uuid --payload-content-type=\"application/octet-stream\" --algorithm aes --bit-length 256 --mode ctr key -f value -c \"Order href\")\nfi\n", "mode": "0700"}}, "md5sum": "9277d70c2fd62961998c5fce0a8aeee2", 
"mode": "0700", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1125, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657551.06-170520068754896/source", "state": "file", "uid": 0} >2018-06-22 04:52:32,242 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n', 'mode': u'0755'}, 'key': 'neutron_ovs_agent_launcher.sh'}) => {"changed": true, "checksum": "03f62b0a94bee17ece72ba1a3fc7577e68d9e6a4", "dest": "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh", "gid": 0, "group": "root", "item": {"key": "neutron_ovs_agent_launcher.sh", "value": {"content": "#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n", "mode": "0755"}}, "md5sum": 
"1672c3fb89d576d045d5f3d5b23684c9", "mode": "0755", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 651, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657551.67-158821863523471/source", "state": "file", "uid": 0} >2018-06-22 04:52:32,844 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\necho "retrieve key_id"\nloop_wait=2\nfor i in {0..5}; do\n #TODO update uuid from mistral here too\n secret_href=$(openstack secret list --name swift_root_secret_uuid)\n if [ "$secret_href" ]; then\n echo "set key_id in keymaster.conf"\n secret_href=$(openstack secret list --name swift_root_secret_uuid -f value -c "Secret href")\n crudini --set /etc/swift/keymaster.conf kms_keymaster key_id ${secret_href##*/}\n exit 0\n else\n echo "no key, wait for $loop_wait and check again"\n sleep $loop_wait\n ((loop_wait++))\n fi\ndone\necho "Failed to set secret in keymaster.conf, check if Barbican is enabled and responding properly"\nexit 1\n', 'mode': u'0700'}, 'key': 'set_swift_keymaster_key_id.sh'}) => {"changed": true, "checksum": "9c2474fa6e4a8869674b689206eb1a1658a28fc6", "dest": "/var/lib/docker-config-scripts/set_swift_keymaster_key_id.sh", "gid": 0, "group": "root", "item": {"key": "set_swift_keymaster_key_id.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get 
/etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\necho \"retrieve key_id\"\nloop_wait=2\nfor i in {0..5}; do\n #TODO update uuid from mistral here too\n secret_href=$(openstack secret list --name swift_root_secret_uuid)\n if [ \"$secret_href\" ]; then\n echo \"set key_id in keymaster.conf\"\n secret_href=$(openstack secret list --name swift_root_secret_uuid -f value -c \"Secret href\")\n crudini --set /etc/swift/keymaster.conf kms_keymaster key_id ${secret_href##*/}\n exit 0\n else\n echo \"no key, wait for $loop_wait and check again\"\n sleep $loop_wait\n ((loop_wait++))\n fi\ndone\necho \"Failed to set secret in keymaster.conf, check if Barbican is enabled and responding properly\"\nexit 1\n", "mode": "0700"}}, "md5sum": "054225f8957e4457ef2103ce24d44b04", "mode": "0700", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1275, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657552.27-33873464025994/source", "state": "file", "uid": 0} >2018-06-22 04:52:33,453 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nset -eux\nSTEP=$1\nTAGS=$2\nCONFIG=$3\nEXTRA_ARGS=${4:-\'\'}\nif [ -d /tmp/puppet-etc ]; then\n # ignore copy failures as these may be the same file depending on docker mounts\n cp -a /tmp/puppet-etc/* /etc/puppet || true\nfi\necho "{\\"step\\": ${STEP}}" > /etc/puppet/hieradata/docker.json\nexport FACTER_uuid=docker\nset +e\npuppet apply $EXTRA_ARGS \\\n --verbose \\\n 
--detailed-exitcodes \\\n --summarize \\\n --color=false \\\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules \\\n --tags $TAGS \\\n -e "${CONFIG}"\nrc=$?\nset -e\nset +ux\nif [ $rc -eq 2 -o $rc -eq 0 ]; then\n exit 0\nfi\nexit $rc\n', 'mode': u'0700'}, 'key': 'docker_puppet_apply.sh'}) => {"changed": true, "checksum": "93afaa6df42c9ead7768b295fa901f83ae1b39ef", "dest": "/var/lib/docker-config-scripts/docker_puppet_apply.sh", "gid": 0, "group": "root", "item": {"key": "docker_puppet_apply.sh", "value": {"content": "#!/bin/bash\nset -eux\nSTEP=$1\nTAGS=$2\nCONFIG=$3\nEXTRA_ARGS=${4:-''}\nif [ -d /tmp/puppet-etc ]; then\n # ignore copy failures as these may be the same file depending on docker mounts\n cp -a /tmp/puppet-etc/* /etc/puppet || true\nfi\necho \"{\\\"step\\\": ${STEP}}\" > /etc/puppet/hieradata/docker.json\nexport FACTER_uuid=docker\nset +e\npuppet apply $EXTRA_ARGS \\\n --verbose \\\n --detailed-exitcodes \\\n --summarize \\\n --color=false \\\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules \\\n --tags $TAGS \\\n -e \"${CONFIG}\"\nrc=$?\nset -e\nset +ux\nif [ $rc -eq 2 -o $rc -eq 0 ]; then\n exit 0\nfi\nexit $rc\n", "mode": "0700"}}, "md5sum": "709b2caef95cc7486f9b851414e71133", "mode": "0700", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 653, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657552.87-80574603799778/source", "state": "file", "uid": 0} >2018-06-22 04:52:34,056 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nDEFID=$(nova-manage cell_v2 list_cells | sed -e \'1,3d\' -e \'$d\' | awk -F \' *| *\' \'$2 == "default" {print $4}\')\nif [ "$DEFID" ]; then\n echo "(cellv2) Updating default cell_v2 cell $DEFID"\n su nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 update_cell --cell_uuid $DEFID --name=default"\nelse\n echo "(cellv2) Creating default cell_v2 cell"\n su nova -s 
/bin/bash -c "/usr/bin/nova-manage cell_v2 create_cell --name=default"\nfi\n', 'mode': u'0700'}, 'key': u'nova_api_ensure_default_cell.sh'}) => {"changed": true, "checksum": "0a839197c2fa15204014befb1c771a17aea5bdd1", "dest": "/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh", "gid": 0, "group": "root", "item": {"key": "nova_api_ensure_default_cell.sh", "value": {"content": "#!/bin/bash\nDEFID=$(nova-manage cell_v2 list_cells | sed -e '1,3d' -e '$d' | awk -F ' *| *' '$2 == \"default\" {print $4}')\nif [ \"$DEFID\" ]; then\n echo \"(cellv2) Updating default cell_v2 cell $DEFID\"\n su nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 update_cell --cell_uuid $DEFID --name=default\"\nelse\n echo \"(cellv2) Creating default cell_v2 cell\"\n su nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 create_cell --name=default\"\nfi\n", "mode": "0700"}}, "md5sum": "12a4a82656ddaae342942097b752d9db", "mode": "0700", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 442, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657553.48-145687399029183/source", "state": "file", "uid": 0} >2018-06-22 04:52:34,079 p=11115 u=mistral | TASK [Set docker_config_default fact] ****************************************** >2018-06-22 04:52:34,134 p=11115 u=mistral | ok: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 04:52:34,148 p=11115 u=mistral | ok: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 04:52:34,156 p=11115 u=mistral | ok: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 04:52:34,156 p=11115 u=mistral | ok: [compute-0] => (item=None) => {"censored": "the output has been 
hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 04:52:34,156 p=11115 u=mistral | ok: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 04:52:34,158 p=11115 u=mistral | ok: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 04:52:34,159 p=11115 u=mistral | ok: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 04:52:34,166 p=11115 u=mistral | ok: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 04:52:34,166 p=11115 u=mistral | ok: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 04:52:34,172 p=11115 u=mistral | ok: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 04:52:34,174 p=11115 u=mistral | ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 04:52:34,185 p=11115 u=mistral | ok: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 04:52:34,185 p=11115 u=mistral | ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 04:52:34,187 p=11115 u=mistral | ok: 
[compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 04:52:34,193 p=11115 u=mistral | ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 04:52:34,195 p=11115 u=mistral | ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 04:52:34,201 p=11115 u=mistral | ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 04:52:34,210 p=11115 u=mistral | ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 04:52:34,232 p=11115 u=mistral | TASK [Set docker_startup_configs_with_default fact] **************************** >2018-06-22 04:52:34,333 p=11115 u=mistral | ok: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 04:52:34,354 p=11115 u=mistral | ok: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 04:52:34,767 p=11115 u=mistral | ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 04:52:34,788 p=11115 u=mistral | TASK [Write docker-container-startup-configs] ********************************** >2018-06-22 04:52:35,459 p=11115 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "ce9bc1dccca0cdcaa3098c1a790d78a8c694a5a4", "dest": "/var/lib/docker-container-startup-configs.json", "gid": 
0, "group": "root", "md5sum": "ccd9b33a462e8e1243e2dc1f30301019", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1055, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657554.88-45904156876608/source", "state": "file", "uid": 0} >2018-06-22 04:52:35,496 p=11115 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "8fee322f0ef2128c81834c00b289fc173c9e5d38", "dest": "/var/lib/docker-container-startup-configs.json", "gid": 0, "group": "root", "md5sum": "594ef7e62130ed34321014d3e001121f", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 105573, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657554.83-23614368071343/source", "state": "file", "uid": 0} >2018-06-22 04:52:35,509 p=11115 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "ea8622945980cce2aa6f6a0ec285f28fef454eb3", "dest": "/var/lib/docker-container-startup-configs.json", "gid": 0, "group": "root", "md5sum": "6a2e3c98b99c4f234941b76485bb3f0e", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 11909, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657554.85-172698555282084/source", "state": "file", "uid": 0} >2018-06-22 04:52:35,533 p=11115 u=mistral | TASK [Write per-step docker-container-startup-configs] ************************* >2018-06-22 04:52:36,237 p=11115 u=mistral | changed: [ceph-0] => (item={'value': {}, 'key': u'step_1'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_1.json", "gid": 0, "group": "root", "item": {"key": "step_1", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657555.62-47583045333424/source", "state": "file", "uid": 0} >2018-06-22 04:52:36,239 p=11115 
u=mistral | changed: [compute-0] => (item={'value': {}, 'key': u'step_1'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_1.json", "gid": 0, "group": "root", "item": {"key": "step_1", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657555.6-69607538301518/source", "state": "file", "uid": 0} >2018-06-22 04:52:36,266 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'cinder_volume_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-cinder-volume:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'mysql_image_tag': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-mariadb:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'mysql_data_ownership': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'command': [u'chown', u'-R', u'mysql:', u'/var/lib/mysql'], 'user': u'root', 'volumes': 
[u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'detach': False}, 'memcached_init_logs': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'source /etc/sysconfig/memcached; touch /var/log/memcached.log && chown ${USER} /var/log/memcached.log'], 'user': u'root', 'volumes': [u'/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro', u'/var/log/containers/memcached:/var/log/'], 'detach': False, 'privileged': False}, 'redis_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-redis:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'mysql_bootstrap': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'KOLLA_BOOTSTRAP=True', u'DB_MAX_TIMEOUT=60', u'DB_CLUSTERCHECK_PASSWORD=8omuhCCcfP1YuJzPZS8tLp3AL', u'DB_ROOT_PASSWORD=zeHIZe0ICg'], 'command': [u'bash', u'-ec', u'if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\necho -e "\\n[mysqld]\\nwsrep_provider=none" >> /etc/my.cnf\nkolla_set_configs\nsudo -u mysql -E kolla_extend_start\nmysqld_safe --skip-networking --wsrep-on=OFF &\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c \'until mysqladmin -uroot -p"${DB_ROOT_PASSWORD}" ping 2>/dev/null; do sleep 1; done\'\nmysql -uroot -p"${DB_ROOT_PASSWORD}" -e "CREATE USER \'clustercheck\'@\'localhost\' IDENTIFIED BY \'${DB_CLUSTERCHECK_PASSWORD}\';"\nmysql -uroot -p"${DB_ROOT_PASSWORD}" -e "GRANT PROCESS ON *.* TO \'clustercheck\'@\'localhost\' WITH GRANT 
OPTION;"\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p"${DB_ROOT_PASSWORD}" shutdown'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro', u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'detach': False}, 'haproxy_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-haproxy:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'rabbitmq_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-rabbitmq:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': 
False}, 'cinder_backup_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-cinder-backup:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'rabbitmq_bootstrap': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'KOLLA_BOOTSTRAP=True', u'RABBITMQ_CLUSTER_COOKIE=n8jIt9appI3hU5NXoG3W'], 'volumes': [u'/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro', u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/var/lib/rabbitmq:/var/lib/rabbitmq'], 'net': u'host', 'privileged': False}, 'memcached': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'source /etc/sysconfig/memcached; /usr/bin/memcached -p ${PORT} -u ${USER} -m ${CACHESIZE} -c ${MAXCONN} $OPTIONS >> /var/log/memcached.log 2>&1'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', 
u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro', u'/var/log/containers/memcached:/var/log/'], 'net': u'host', 'privileged': False, 'restart': u'always'}}, 'key': u'step_1'}) => {"changed": true, "checksum": "6ed04ef67fe6d8f97037e1cd69a5309ba391ac53", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_1.json", "gid": 0, "group": "root", "item": {"key": "step_1", "value": {"cinder_backup_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-cinder-backup:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "cinder_volume_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-cinder-volume:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "haproxy_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-haproxy:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", 
"volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "memcached": {"command": ["/bin/bash", "-c", "source /etc/sysconfig/memcached; /usr/bin/memcached -p ${PORT} -u ${USER} -m ${CACHESIZE} -c ${MAXCONN} $OPTIONS >> /var/log/memcached.log 2>&1"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro", "/var/log/containers/memcached:/var/log/"]}, "memcached_init_logs": {"command": ["/bin/bash", "-c", "source /etc/sysconfig/memcached; touch /var/log/memcached.log && chown ${USER} /var/log/memcached.log"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro", "/var/log/containers/memcached:/var/log/"]}, "mysql_bootstrap": {"command": ["bash", "-ec", "if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\necho -e \"\\n[mysqld]\\nwsrep_provider=none\" >> /etc/my.cnf\nkolla_set_configs\nsudo -u mysql -E kolla_extend_start\nmysqld_safe --skip-networking --wsrep-on=OFF &\ntimeout 
${DB_MAX_TIMEOUT} /bin/bash -c 'until mysqladmin -uroot -p\"${DB_ROOT_PASSWORD}\" ping 2>/dev/null; do sleep 1; done'\nmysql -uroot -p\"${DB_ROOT_PASSWORD}\" -e \"CREATE USER 'clustercheck'@'localhost' IDENTIFIED BY '${DB_CLUSTERCHECK_PASSWORD}';\"\nmysql -uroot -p\"${DB_ROOT_PASSWORD}\" -e \"GRANT PROCESS ON *.* TO 'clustercheck'@'localhost' WITH GRANT OPTION;\"\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p\"${DB_ROOT_PASSWORD}\" shutdown"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "KOLLA_BOOTSTRAP=True", "DB_MAX_TIMEOUT=60", "DB_CLUSTERCHECK_PASSWORD=8omuhCCcfP1YuJzPZS8tLp3AL", "DB_ROOT_PASSWORD=zeHIZe0ICg"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro", "/var/lib/mysql:/var/lib/mysql"]}, "mysql_data_ownership": {"command": ["chown", "-R", "mysql:", "/var/lib/mysql"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/var/lib/mysql:/var/lib/mysql"]}, "mysql_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-mariadb:pcmklatest'"], "detach": false, "image": 
"192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "rabbitmq_bootstrap": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "KOLLA_BOOTSTRAP=True", "RABBITMQ_CLUSTER_COOKIE=n8jIt9appI3hU5NXoG3W"], "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", "net": "host", "privileged": false, "start_order": 0, "volumes": ["/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro", "/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/var/lib/rabbitmq:/var/lib/rabbitmq"]}, "rabbitmq_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-rabbitmq:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "redis_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-redis:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}}}, 
"md5sum": "04ad0163fb197eeb581f7e65b7213dab", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 7434, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657555.61-15419225823746/source", "state": "file", "uid": 0} >2018-06-22 04:52:36,857 p=11115 u=mistral | changed: [ceph-0] => (item={'value': {}, 'key': u'step_3'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_3.json", "gid": 0, "group": "root", "item": {"key": "step_3", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657556.24-11711017261985/source", "state": "file", "uid": 0} >2018-06-22 04:52:36,874 p=11115 u=mistral | changed: [compute-0] => (item={'value': {'neutron_ovs_bridge': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'puppet', u'apply', u'--modulepath', u'/etc/puppet/modules:/usr/share/openstack-puppet/modules', u'--tags', u'file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config', u'-v', u'-e', u'include neutron::agents::ml2::ovs'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', 
u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/etc/puppet:/etc/puppet:ro', u'/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro', u'/var/run/openvswitch/:/var/run/openvswitch/'], 'net': u'host', 'detach': False, 'privileged': True}, 'nova_libvirt': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova_libvirt.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/lib/modules:/lib/modules:ro', u'/dev:/dev', u'/run:/run', u'/sys/fs/cgroup:/sys/fs/cgroup', u'/var/lib/nova:/var/lib/nova:shared', u'/etc/libvirt:/etc/libvirt', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt', u'/var/log/containers/libvirt:/var/log/libvirt', u'/var/log/libvirt/qemu:/var/log/libvirt/qemu:ro', u'/var/lib/vhost_sockets:/var/lib/vhost_sockets', u'/sys/fs/selinux:/sys/fs/selinux'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'iscsid': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4', 'environment': 
[u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', u'/dev/:/dev/', u'/run/:/run/', u'/sys:/sys', u'/lib/modules:/lib/modules:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_virtlogd': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/dev:/dev', u'/run:/run', u'/sys/fs/cgroup:/sys/fs/cgroup', u'/var/lib/nova:/var/lib/nova:shared', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt', 
u'/etc/libvirt/qemu:/etc/libvirt/qemu:ro', u'/var/log/libvirt/qemu:/var/log/libvirt/qemu'], 'net': u'host', 'privileged': True, 'restart': u'always'}}, 'key': u'step_3'}) => {"changed": true, "checksum": "7410b402d81937d9a195a3bf5e8207fa09cdb6e0", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_3.json", "gid": 0, "group": "root", "item": {"key": "step_3", "value": {"iscsid": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro", "/dev/:/dev/", "/run/:/run/", "/sys:/sys", "/lib/modules:/lib/modules:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro"]}, "neutron_ovs_bridge": {"command": ["puppet", "apply", "--modulepath", "/etc/puppet/modules:/usr/share/openstack-puppet/modules", "--tags", "file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config", "-v", "-e", "include neutron::agents::ml2::ovs"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/etc/puppet:/etc/puppet:ro", "/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro", "/var/run/openvswitch/:/var/run/openvswitch/"]}, "nova_libvirt": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova_libvirt.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/lib/modules:/lib/modules:ro", "/dev:/dev", "/run:/run", "/sys/fs/cgroup:/sys/fs/cgroup", "/var/lib/nova:/var/lib/nova:shared", 
"/etc/libvirt:/etc/libvirt", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt", "/var/log/containers/libvirt:/var/log/libvirt", "/var/log/libvirt/qemu:/var/log/libvirt/qemu:ro", "/var/lib/vhost_sockets:/var/lib/vhost_sockets", "/sys/fs/selinux:/sys/fs/selinux"]}, "nova_virtlogd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/dev:/dev", "/run:/run", "/sys/fs/cgroup:/sys/fs/cgroup", "/var/lib/nova:/var/lib/nova:shared", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt", "/etc/libvirt/qemu:/etc/libvirt/qemu:ro", "/var/log/libvirt/qemu:/var/log/libvirt/qemu"]}}}, "md5sum": "57cce5acf78ba9c384000a575f958249", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 5050, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657556.24-104880373470768/source", "state": "file", "uid": 0} >2018-06-22 04:52:36,924 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'nova_placement': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': 
u'192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-placement:/var/log/httpd', u'/var/lib/kolla/config_files/nova_placement.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_placement/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'restart': u'always'}, 'nova_db_sync': {'start_order': 3, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage db sync'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', 
u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'heat_engine_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec heat_engine su heat -s /bin/bash -c 'heat-manage db_sync'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/lib/config-data/heat/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/heat/etc/heat/:/etc/heat/:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'swift_copy_rings': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4', 'detach': False, 'command': [u'/bin/bash', u'-c', u'cp -v -a -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups'], 'user': u'root', 'volumes': [u'/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw', u'/var/lib/config-data/swift_ringbuilder:/swift_ringbuilder:ro']}, 'nova_api_ensure_default_cell': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': u'/usr/bin/bootstrap_host_exec nova_api /nova_api_ensure_default_cell.sh', 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh:/nova_api_ensure_default_cell.sh:ro'], 'net': u'host', 'detach': False}, 'keystone_cron': {'start_order': 4, 'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'/bin/bash', u'-c', u'/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': 
False, 'restart': u'always'}, 'panko_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec panko_api su panko -s /bin/bash -c '/usr/bin/panko-dbsync '", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd', u'/var/lib/config-data/panko/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/panko/etc/panko:/etc/panko:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'cinder_backup_init_logs': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'user': u'root', 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'privileged': False}, 'nova_api_db_sync': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage api_db sync'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'iscsid': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', u'/dev/:/dev/', u'/run/:/run/', u'/sys:/sys', u'/lib/modules:/lib/modules:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'keystone_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4', 'environment': [u'KOLLA_BOOTSTRAP=True', u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'/usr/bin/bootstrap_host_exec', u'keystone', u'/usr/local/bin/kolla_start'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'detach': False, 'privileged': False}, 'ceilometer_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'start_order': 0, 'volumes': [u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'user': u'root'}, 'keystone': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro', u'', 
u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'aodh_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4', 'command': u'/usr/bin/bootstrap_host_exec aodh_api su aodh -s /bin/bash -c /usr/bin/aodh-dbsync', 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/aodh/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/aodh/etc/aodh/:/etc/aodh/:ro', u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd'], 'net': u'host', 'detach': False, 'privileged': False}, 'cinder_volume_init_logs': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'user': u'root', 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'privileged': False}, 'neutron_ovs_bridge': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'puppet', u'apply', u'--modulepath', u'/etc/puppet/modules:/usr/share/openstack-puppet/modules', u'--tags', u'file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config', u'-v', u'-e', u'include neutron::agents::ml2::ovs'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/etc/puppet:/etc/puppet:ro', u'/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro', u'/var/run/openvswitch/:/var/run/openvswitch/'], 'net': u'host', 'detach': False, 'privileged': True}, 'cinder_api_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_api', u"su cinder -s /bin/bash -c 'cinder-manage db sync --bump-versions'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'net': u'host', 'detach': False, 'privileged': False}, 'nova_api_map_cell0': {'start_order': 1, 'image': 
u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage cell_v2 map_cell0'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'glance_api_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4', 'environment': [u'KOLLA_BOOTSTRAP=True', u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': u"/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/glance:/var/log/glance', u'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', 
u'/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/glance:/var/lib/glance:slave'], 'net': u'host', 'detach': False, 'privileged': False}, 'neutron_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'command': [u'/usr/bin/bootstrap_host_exec', u'neutron_api', u'neutron-db-manage', u'upgrade', u'heads'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd', u'/var/lib/config-data/neutron/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/neutron/etc/neutron:/etc/neutron:ro', u'/var/lib/config-data/neutron/usr/share/neutron:/usr/share/neutron:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'sahara_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec sahara_api su sahara -s /bin/bash -c 'sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade head'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/sahara/etc/sahara/:/etc/sahara/:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'detach': False, 'privileged': False}, 'keystone_bootstrap': {'action': u'exec', 'start_order': 3, 'command': [u'keystone', u'/usr/bin/bootstrap_host_exec', u'keystone', u'keystone-manage', u'bootstrap', u'--bootstrap-password', u'6CLNy5Ewot5UhcBYmt27oGDMD'], 'user': u'root'}, 'horizon': {'image': u'192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'ENABLE_IRONIC=yes', u'ENABLE_MANILA=yes', u'ENABLE_HEAT=yes', u'ENABLE_MISTRAL=yes', u'ENABLE_OCTAVIA=yes', u'ENABLE_SAHARA=yes', u'ENABLE_CLOUDKITTY=no', u'ENABLE_FREEZER=no', u'ENABLE_FWAAS=no', u'ENABLE_KARBOR=no', u'ENABLE_DESIGNATE=no', u'ENABLE_MAGNUM=no', u'ENABLE_MURANO=no', u'ENABLE_NEUTRON_LBAAS=no', u'ENABLE_SEARCHLIGHT=no', u'ENABLE_SENLIN=no', u'ENABLE_SOLUM=no', u'ENABLE_TACKER=no', u'ENABLE_TROVE=no', u'ENABLE_WATCHER=no', u'ENABLE_ZAQAR=no', u'ENABLE_ZUN=no'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', 
u'/var/lib/config-data/puppet-generated/horizon/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/horizon:/var/log/horizon', u'/var/log/containers/httpd/horizon:/var/log/httpd', u'/var/www/:/var/www/:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_setup_srv': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'command': [u'chown', u'-R', u'swift:', u'/srv/node'], 'user': u'root', 'volumes': [u'/srv/node:/srv/node']}}, 'key': u'step_3'}) => {"changed": true, "checksum": "16f70a31b7af2c706e6f92cce58994006ac0aab9", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_3.json", "gid": 0, "group": "root", "item": {"key": "step_3", "value": {"aodh_db_sync": {"command": "/usr/bin/bootstrap_host_exec aodh_api su aodh -s /bin/bash -c /usr/bin/aodh-dbsync", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/aodh/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/aodh/etc/aodh/:/etc/aodh/:ro", "/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd"]}, "ceilometer_init_log": {"command": ["/bin/bash", "-c", "chown -R ceilometer:ceilometer /var/log/ceilometer"], "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-06-19.4", "start_order": 0, "user": "root", "volumes": 
["/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_api", "su cinder -s /bin/bash -c 'cinder-manage db sync --bump-versions'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_backup_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "cinder_volume_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "glance_api_db_sync": {"command": "/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", "detach": false, "environment": ["KOLLA_BOOTSTRAP=True", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": 
"192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/glance:/var/log/glance", "/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/glance:/var/lib/glance:slave"]}, "heat_engine_db_sync": {"command": "/usr/bin/bootstrap_host_exec heat_engine su heat -s /bin/bash -c 'heat-manage db_sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/lib/config-data/heat/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/heat/etc/heat/:/etc/heat/:ro"]}, "horizon": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", 
"ENABLE_IRONIC=yes", "ENABLE_MANILA=yes", "ENABLE_HEAT=yes", "ENABLE_MISTRAL=yes", "ENABLE_OCTAVIA=yes", "ENABLE_SAHARA=yes", "ENABLE_CLOUDKITTY=no", "ENABLE_FREEZER=no", "ENABLE_FWAAS=no", "ENABLE_KARBOR=no", "ENABLE_DESIGNATE=no", "ENABLE_MAGNUM=no", "ENABLE_MURANO=no", "ENABLE_NEUTRON_LBAAS=no", "ENABLE_SEARCHLIGHT=no", "ENABLE_SENLIN=no", "ENABLE_SOLUM=no", "ENABLE_TACKER=no", "ENABLE_TROVE=no", "ENABLE_WATCHER=no", "ENABLE_ZAQAR=no", "ENABLE_ZUN=no"], "image": "192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/horizon/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/horizon:/var/log/horizon", "/var/log/containers/httpd/horizon:/var/log/httpd", "/var/www/:/var/www/:ro", "", ""]}, "iscsid": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro", "/dev/:/dev/", "/run/:/run/", "/sys:/sys", "/lib/modules:/lib/modules:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro"]}, "keystone": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro", "", ""]}, "keystone_bootstrap": {"action": "exec", "command": ["keystone", "/usr/bin/bootstrap_host_exec", "keystone", "keystone-manage", "bootstrap", "--bootstrap-password", "6CLNy5Ewot5UhcBYmt27oGDMD"], "start_order": 3, "user": "root"}, "keystone_cron": {"command": ["/bin/bash", "-c", "/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n"], "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", 
"net": "host", "privileged": false, "restart": "always", "start_order": 4, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro"]}, "keystone_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "keystone", "/usr/local/bin/kolla_start"], "detach": false, "environment": ["KOLLA_BOOTSTRAP=True", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro", "", ""]}, "neutron_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "neutron_api", "neutron-db-manage", "upgrade", "heads"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd", "/var/lib/config-data/neutron/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/neutron/etc/neutron:/etc/neutron:ro", "/var/lib/config-data/neutron/usr/share/neutron:/usr/share/neutron:ro"]}, "neutron_ovs_bridge": {"command": ["puppet", "apply", "--modulepath", "/etc/puppet/modules:/usr/share/openstack-puppet/modules", "--tags", "file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config", "-v", "-e", "include neutron::agents::ml2::ovs"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/etc/puppet:/etc/puppet:ro", "/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro", "/var/run/openvswitch/:/var/run/openvswitch/"]}, "nova_api_db_sync": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage api_db sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_api_ensure_default_cell": {"command": "/usr/bin/bootstrap_host_exec nova_api /nova_api_ensure_default_cell.sh", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", 
"/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh:/nova_api_ensure_default_cell.sh:ro"]}, "nova_api_map_cell0": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage cell_v2 map_cell0'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", 
"/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_db_sync": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage db sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 3, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_placement": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4", "net": "host", "restart": "always", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-placement:/var/log/httpd", 
"/var/lib/kolla/config_files/nova_placement.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_placement/:/var/lib/kolla/config_files/src:ro", "", ""]}, "panko_db_sync": {"command": "/usr/bin/bootstrap_host_exec panko_api su panko -s /bin/bash -c '/usr/bin/panko-dbsync '", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd", "/var/lib/config-data/panko/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/panko/etc/panko:/etc/panko:ro"]}, "sahara_db_sync": {"command": "/usr/bin/bootstrap_host_exec sahara_api su sahara -s /bin/bash -c 'sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade head'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/sahara/etc/sahara/:/etc/sahara/:ro", "/lib/modules:/lib/modules:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "swift_copy_rings": {"command": ["/bin/bash", "-c", "cp -v -a -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", "user": "root", "volumes": ["/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw", "/var/lib/config-data/swift_ringbuilder:/swift_ringbuilder:ro"]}, "swift_setup_srv": {"command": ["chown", "-R", "swift:", "/srv/node"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "user": "root", "volumes": ["/srv/node:/srv/node"]}}}, "md5sum": "96751e80b3a4c2d2ff5e757c69bbd0f1", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 21820, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657556.27-21885112362007/source", "state": "file", "uid": 0} >2018-06-22 04:52:37,474 p=11115 u=mistral | changed: [ceph-0] => (item={'value': {}, 'key': u'step_2'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_2.json", "gid": 0, "group": "root", "item": {"key": "step_2", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657556.87-272149369032588/source", "state": "file", "uid": 0} >2018-06-22 04:52:37,511 p=11115 u=mistral | changed: [compute-0] => (item={'value': {}, 'key': u'step_2'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": 
"/var/lib/tripleo-config/docker-container-startup-config-step_2.json", "gid": 0, "group": "root", "item": {"key": "step_2", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657556.88-57717234619550/source", "state": "file", "uid": 0} >2018-06-22 04:52:37,554 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'gnocchi_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R gnocchi:gnocchi /var/log/gnocchi'], 'user': u'root', 'volumes': [u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd']}, 'mysql_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529656667'], 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', 
u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/mysql:/var/lib/mysql:rw'], 'net': u'host', 'detach': False}, 'gnocchi_init_lib': {'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R gnocchi:gnocchi /var/lib/gnocchi'], 'user': u'root', 'volumes': [u'/var/lib/gnocchi:/var/lib/gnocchi']}, 'cinder_api_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'privileged': False, 'volumes': [u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'user': u'root'}, 'create_dnsmasq_wrapper': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-06-19.4', 'pid': u'host', 'command': [u'/docker_puppet_apply.sh', u'4', u'file', u'include ::tripleo::profile::base::neutron::dhcp_agent_wrappers'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'detach': False}, 'panko_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R panko:panko 
/var/log/panko'], 'user': u'root', 'volumes': [u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd']}, 'redis_init_bundle': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529656667'], 'config_volume': u'redis_init_bundle', 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}, 'cinder_scheduler_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'privileged': False, 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'user': u'root'}, 'glance_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R glance:glance /var/log/glance'], 'privileged': False, 'volumes': 
[u'/var/log/containers/glance:/var/log/glance'], 'user': u'root'}, 'clustercheck': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/clustercheck/:/var/lib/kolla/config_files/src:ro', u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'restart': u'always'}, 'haproxy_init_bundle': {'start_order': 3, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529656667'], 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation', u'include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro', u'/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro', u'/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro', u'/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro', u'/etc/sysconfig:/etc/sysconfig:rw', u'/usr/libexec/iptables:/usr/libexec/iptables:ro', u'/usr/libexec/initscripts/legacy-actions:/usr/libexec/initscripts/legacy-actions:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False, 'privileged': True}, 'neutron_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R neutron:neutron /var/log/neutron'], 'privileged': False, 'volumes': [u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd'], 'user': u'root'}, 'mysql_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'config_volume': u'mysql', 'command': [u'/usr/bin/bootstrap_host_exec', u'mysql', u'if /usr/sbin/pcs resource show galera-bundle; then /usr/sbin/pcs resource restart --wait=600 galera-bundle; echo "galera-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', 
u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'rabbitmq_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529656667'], 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/bin/true:/bin/epmd'], 'net': u'host', 'detach': False}, 'nova_api_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'privileged': False, 'volumes': [u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd'], 'user': u'root'}, 'haproxy_restart_bundle': 
{'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4', 'config_volume': u'haproxy', 'command': [u'/usr/bin/bootstrap_host_exec', u'haproxy', u'if /usr/sbin/pcs resource show haproxy-bundle; then /usr/sbin/pcs resource restart --wait=600 haproxy-bundle; echo "haproxy-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/haproxy/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'create_keepalived_wrapper': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-06-19.4', 'pid': u'host', 'command': [u'/docker_puppet_apply.sh', u'4', u'file', u'include ::tripleo::profile::base::neutron::l3_agent_wrappers'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', 
u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'detach': False}, 'rabbitmq_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4', 'config_volume': u'rabbitmq', 'command': [u'/usr/bin/bootstrap_host_exec', u'rabbitmq', u'if /usr/sbin/pcs resource show rabbitmq-bundle; then /usr/sbin/pcs resource restart --wait=600 rabbitmq-bundle; echo "rabbitmq-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'horizon_fix_perms': {'image': u'192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'touch /var/log/horizon/horizon.log && chown -R apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard'], 'user': u'root', 'volumes': [u'/var/log/containers/horizon:/var/log/horizon', u'/var/log/containers/httpd/horizon:/var/log/httpd', u'/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard']}, 'aodh_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R aodh:aodh /var/log/aodh'], 'user': u'root', 'volumes': 
[u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd']}, 'nova_metadata_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'privileged': False, 'volumes': [u'/var/log/containers/nova:/var/log/nova'], 'user': u'root'}, 'redis_restart_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4', 'config_volume': u'redis', 'command': [u'/usr/bin/bootstrap_host_exec', u'redis', u'if /usr/sbin/pcs resource show redis-bundle; then /usr/sbin/pcs resource restart --wait=600 redis-bundle; echo "redis-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/redis/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'heat_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R heat:heat /var/log/heat'], 'user': u'root', 'volumes': [u'/var/log/containers/heat:/var/log/heat']}, 'nova_placement_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'start_order': 1, 'volumes': [u'/var/log/containers/nova:/var/log/nova', 
u'/var/log/containers/httpd/nova-placement:/var/log/httpd'], 'user': u'root'}, 'keystone_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R keystone:keystone /var/log/keystone'], 'start_order': 1, 'volumes': [u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd'], 'user': u'root'}}, 'key': u'step_2'}) => {"changed": true, "checksum": "de164361e49617ea93b913b22dad010e86d2265c", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_2.json", "gid": 0, "group": "root", "item": {"key": "step_2", "value": {"aodh_init_log": {"command": ["/bin/bash", "-c", "chown -R aodh:aodh /var/log/aodh"], "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd"]}, "cinder_api_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_scheduler_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "clustercheck": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/clustercheck/:/var/lib/kolla/config_files/src:ro", "/var/lib/mysql:/var/lib/mysql"]}, "create_dnsmasq_wrapper": {"command": ["/docker_puppet_apply.sh", "4", "file", "include ::tripleo::profile::base::neutron::dhcp_agent_wrappers"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-06-19.4", "net": "host", "pid": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron"]}, "create_keepalived_wrapper": {"command": ["/docker_puppet_apply.sh", "4", "file", "include ::tripleo::profile::base::neutron::l3_agent_wrappers"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-06-19.4", "net": "host", "pid": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron"]}, "glance_init_logs": {"command": ["/bin/bash", "-c", "chown -R glance:glance /var/log/glance"], "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/glance:/var/log/glance"]}, "gnocchi_init_lib": {"command": ["/bin/bash", "-c", "chown -R gnocchi:gnocchi /var/lib/gnocchi"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", "user": "root", "volumes": ["/var/lib/gnocchi:/var/lib/gnocchi"]}, "gnocchi_init_log": {"command": ["/bin/bash", "-c", "chown -R gnocchi:gnocchi /var/log/gnocchi"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd"]}, "haproxy_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", "include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529656667"], "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", "net": "host", "privileged": true, "start_order": 3, "user": "root", "volumes": 
["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro", "/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro", "/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro", "/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro", "/etc/sysconfig:/etc/sysconfig:rw", "/usr/libexec/iptables:/usr/libexec/iptables:ro", "/usr/libexec/initscripts/legacy-actions:/usr/libexec/initscripts/legacy-actions:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "haproxy_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "haproxy", "if /usr/sbin/pcs resource show haproxy-bundle; then /usr/sbin/pcs resource restart --wait=600 haproxy-bundle; echo \"haproxy-bundle restart invoked\"; fi"], "config_volume": "haproxy", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/haproxy/:/var/lib/kolla/config_files/src:ro"]}, "heat_init_log": {"command": ["/bin/bash", "-c", "chown -R heat:heat /var/log/heat"], "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/heat:/var/log/heat"]}, "horizon_fix_perms": {"command": ["/bin/bash", "-c", "touch /var/log/horizon/horizon.log && chown -R apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard"], "image": "192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/horizon:/var/log/horizon", "/var/log/containers/httpd/horizon:/var/log/httpd", "/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard"]}, "keystone_init_log": {"command": ["/bin/bash", "-c", "chown -R keystone:keystone /var/log/keystone"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "start_order": 1, "user": "root", "volumes": ["/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd"]}, "mysql_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529656667"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/mysql:/var/lib/mysql:rw"]}, "mysql_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "mysql", "if /usr/sbin/pcs resource show galera-bundle; then /usr/sbin/pcs resource restart --wait=600 galera-bundle; echo \"galera-bundle restart invoked\"; fi"], "config_volume": "mysql", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro"]}, "neutron_init_logs": {"command": ["/bin/bash", "-c", "chown -R neutron:neutron /var/log/neutron"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "privileged": false, "user": "root", "volumes": 
["/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd"]}, "nova_api_init_logs": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd"]}, "nova_metadata_init_log": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova"]}, "nova_placement_init_log": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4", "start_order": 1, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-placement:/var/log/httpd"]}, "panko_init_log": {"command": ["/bin/bash", "-c", "chown -R panko:panko /var/log/panko"], "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd"]}, "rabbitmq_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529656667"], "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/bin/true:/bin/epmd"]}, "rabbitmq_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "rabbitmq", "if /usr/sbin/pcs resource show rabbitmq-bundle; then /usr/sbin/pcs resource restart --wait=600 rabbitmq-bundle; echo \"rabbitmq-bundle restart invoked\"; fi"], "config_volume": "rabbitmq", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro"]}, "redis_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", 
"file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle", "--debug"], "config_volume": "redis_init_bundle", "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529656667"], "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "redis_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "redis", "if /usr/sbin/pcs resource show redis-bundle; then /usr/sbin/pcs resource restart --wait=600 redis-bundle; echo \"redis-bundle restart invoked\"; fi"], "config_volume": "redis", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/redis/:/var/lib/kolla/config_files/src:ro"]}}}, "md5sum": "c98536cb2495aad352f9ec5240ed5588", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 17318, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657556.92-264049278561827/source", "state": "file", "uid": 0} >2018-06-22 04:52:38,104 p=11115 u=mistral | changed: [ceph-0] => (item={'value': {}, 'key': u'step_5'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_5.json", "gid": 0, "group": "root", "item": {"key": "step_5", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657557.48-139800202866725/source", "state": "file", "uid": 0} >2018-06-22 04:52:38,161 p=11115 u=mistral | changed: [compute-0] => (item={'value': {}, 'key': u'step_5'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_5.json", "gid": 0, "group": "root", "item": {"key": "step_5", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657557.52-160490117662498/source", "state": "file", "uid": 0} >2018-06-22 04:52:38,194 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'cinder_volume_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4', 'environment': 
[u'TRIPLEO_DEPLOY_IDENTIFIER=1529656667'], 'command': [u'/docker_puppet_apply.sh', u'5', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::volume_bundle', u'--debug --verbose'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}, 'cinder_volume_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4', 'config_volume': u'cinder', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_volume', u'if /usr/sbin/pcs resource show openstack-cinder-volume; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-volume; echo "openstack-cinder-volume restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', 
u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'gnocchi_statsd': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-statsd:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_statsd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/gnocchi:/var/lib/gnocchi'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'cinder_backup_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4', 'config_volume': u'cinder', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_backup', u'if /usr/sbin/pcs resource show openstack-cinder-backup; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-backup; echo "openstack-cinder-backup restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'gnocchi_metricd': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-metricd:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_metricd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/gnocchi:/var/lib/gnocchi'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_api_discover_hosts': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529656667'], 'command': u'/usr/bin/bootstrap_host_exec nova_api /nova_api_discover_hosts.sh', 'user': u'root', 
'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/docker-config-scripts/nova_api_discover_hosts.sh:/nova_api_discover_hosts.sh:ro'], 'net': u'host', 'detach': False}, 'ceilometer_gnocchi_upgrade': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4', 'command': [u'/usr/bin/bootstrap_host_exec', u'ceilometer_agent_central', u"su ceilometer -s /bin/bash -c 'for n in {1..10}; do /usr/bin/ceilometer-upgrade --skip-metering-database && exit 0 || sleep 5; done; exit 1'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/ceilometer/etc/ceilometer/:/etc/ceilometer/:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'detach': False, 'privileged': False}, 'gnocchi_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/gnocchi:/var/lib/gnocchi', u'/var/lib/kolla/config_files/gnocchi_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'cinder_backup_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529656667'], 'command': [u'/docker_puppet_apply.sh', u'5', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::backup_bundle', u'--debug --verbose'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}}, 'key': u'step_5'}) => {"changed": true, "checksum": "b059bde8e5da52d05db7e457fbf3c02b8a90b7ef", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_5.json", "gid": 0, "group": "root", "item": {"key": "step_5", "value": {"ceilometer_gnocchi_upgrade": {"command": ["/usr/bin/bootstrap_host_exec", "ceilometer_agent_central", "su ceilometer -s /bin/bash -c 'for n in {1..10}; do /usr/bin/ceilometer-upgrade --skip-metering-database && exit 0 || sleep 5; done; exit 1'"], "detach": false, "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", "net": "host", "privileged": false, "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/ceilometer/etc/ceilometer/:/etc/ceilometer/:ro", 
"/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_backup_init_bundle": {"command": ["/docker_puppet_apply.sh", "5", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::backup_bundle", "--debug --verbose"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529656667"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "cinder_backup_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_backup", "if /usr/sbin/pcs resource show openstack-cinder-backup; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-backup; echo \"openstack-cinder-backup restart invoked\"; fi"], "config_volume": "cinder", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro"]}, "cinder_volume_init_bundle": {"command": ["/docker_puppet_apply.sh", "5", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::volume_bundle", "--debug --verbose"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529656667"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "cinder_volume_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_volume", "if /usr/sbin/pcs resource show openstack-cinder-volume; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-volume; echo \"openstack-cinder-volume restart invoked\"; fi"], 
"config_volume": "cinder", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro"]}, "gnocchi_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/gnocchi:/var/lib/gnocchi", "/var/lib/kolla/config_files/gnocchi_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd", 
"/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "", ""]}, "gnocchi_metricd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-metricd:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_metricd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/gnocchi:/var/lib/gnocchi"]}, "gnocchi_statsd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-statsd:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/var/lib/kolla/config_files/gnocchi_statsd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/gnocchi:/var/lib/gnocchi"]}, "nova_api_discover_hosts": {"command": "/usr/bin/bootstrap_host_exec nova_api /nova_api_discover_hosts.sh", "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529656667"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/docker-config-scripts/nova_api_discover_hosts.sh:/nova_api_discover_hosts.sh:ro"]}}}, "md5sum": "97c4b621a9490421a93b445b7ba9f421", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 10552, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657557.56-254034765883553/source", "state": "file", "uid": 0} >2018-06-22 04:52:38,719 p=11115 u=mistral | changed: [ceph-0] => (item={'value': 
{'logrotate_crond': {'image': u'192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": true, "checksum": "8acd94aee3f5b5403e8fb7f16593594f245dafee", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_4.json", "gid": 0, "group": "root", "item": {"key": "step_4", "value": {"logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}}}, "md5sum": "2aaa44b365bea28e18d96f2f17bef412", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 973, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657558.11-1713506155142/source", "state": "file", "uid": 0} >2018-06-22 04:52:38,815 p=11115 u=mistral | changed: [compute-0] => (item={'value': {'ceilometer_agent_compute': {'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-compute:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/run/libvirt:/var/run/libvirt:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_libvirt_init_secret': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/virsh secret-define --file /etc/nova/secret.xml && /usr/bin/virsh secret-set-value --secret '53912472-747b-11e8-95a3-5254003d7dcb' --base64 'AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA=='"], 'user': u'root', 'volumes': 
[u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova:ro', u'/etc/libvirt:/etc/libvirt', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt'], 'detach': False, 'privileged': False}, 'neutron_ovs_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch'], 'net': u'host', 'privileged': True, 
'restart': u'always'}, 'nova_migration_target': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/etc/ssh/:/host-ssh/:ro', u'/run:/run', u'/var/lib/nova:/var/lib/nova:shared'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_compute': {'ipc': u'host', 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'nova', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', 
u'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/dev:/dev', u'/lib/modules:/lib/modules:ro', u'/run:/run', u'/var/lib/nova:/var/lib/nova:shared', u'/var/lib/libvirt:/var/lib/libvirt', u'/sys/class/net:/sys/class/net', u'/sys/bus/pci:/sys/bus/pci'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'logrotate_crond': {'image': u'192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": true, "checksum": "0d417e60cd9c4b580b8889ca2b34ab7a7cd1c84e", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_4.json", "gid": 0, "group": "root", "item": {"key": "step_4", "value": {"ceilometer_agent_compute": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-compute:2018-06-19.4", "net": "host", "privileged": false, "restart": 
"always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/run/libvirt:/var/run/libvirt:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}, "neutron_ovs_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": 
"192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch"]}, "nova_compute": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4", "ipc": "host", "net": "host", "privileged": true, "restart": "always", "ulimit": ["nofile=1024"], "user": "nova", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", 
"/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/dev:/dev", "/lib/modules:/lib/modules:ro", "/run:/run", "/var/lib/nova:/var/lib/nova:shared", "/var/lib/libvirt:/var/lib/libvirt", "/sys/class/net:/sys/class/net", "/sys/bus/pci:/sys/bus/pci"]}, "nova_libvirt_init_secret": {"command": ["/bin/bash", "-c", "/usr/bin/virsh secret-define --file /etc/nova/secret.xml && /usr/bin/virsh secret-set-value --secret '53912472-747b-11e8-95a3-5254003d7dcb' --base64 'AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA=='"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova:ro", "/etc/libvirt:/etc/libvirt", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt"]}, "nova_migration_target": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/ssh/:/host-ssh/:ro", "/run:/run", "/var/lib/nova:/var/lib/nova:shared"]}}}, "md5sum": "43f4c7750111fb2e9d00b850149a8ce7", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 6779, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657558.17-273286466387121/source", "state": "file", "uid": 0} >2018-06-22 04:52:38,859 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'swift_container_updater': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 
'restart': u'always'}, 'aodh_evaluator': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-evaluator:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_evaluator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_scheduler': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-scheduler:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro', 
u'/run:/run'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_object_server': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'cinder_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', 
u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_proxy': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/run:/run', u'/srv/node:/srv/node', u'/dev:/dev'], 'net': u'host', 'restart': u'always'}, 'neutron_dhcp': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron', u'/run/netns:/run/netns:shared', u'/var/lib/openstack:/var/lib/openstack', u'/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', u'/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'heat_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_object_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': 
[u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'neutron_metadata_agent': {'start_order': 10, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 
'ceilometer_agent_central': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_central.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'keystone_refresh': {'action': u'exec', 'start_order': 1, 'command': [u'keystone', u'pkill', u'--signal', u'USR1', u'httpd'], 'user': u'root'}, 'swift_account_replicator': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/kolla/config_files/swift_account_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'aodh_notifier': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-notifier:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_notifier.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_consoleauth': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-consoleauth:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_consoleauth.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'gnocchi_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_db_sync.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/lib/gnocchi:/var/lib/gnocchi', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'swift_account_reaper': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'ceilometer_agent_notification': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_notification.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src-panko:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_vnc_proxy': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-novncproxy:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_rsync': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_rsync.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'aodh_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4', 'environment': 
[u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_metadata': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'nova', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'heat_engine': {'healthcheck': {'test': 
u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_container_server': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 
'net': u'host', 'restart': u'always'}, 'swift_object_replicator': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'neutron_l3_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', 
u'/var/lib/kolla/config_files/neutron_l3_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron', u'/run/netns:/run/netns:shared', u'/var/lib/openstack:/var/lib/openstack', u'/var/lib/neutron/keepalived_wrapper:/usr/local/bin/keepalived:ro', u'/var/lib/neutron/l3_haproxy_wrapper:/usr/local/bin/haproxy:ro', u'/var/lib/neutron/dibbler_wrapper:/usr/local/bin/dibbler_client:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'cinder_scheduler': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_conductor': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-conductor:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'heat_api_cfn': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api-cfn:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api_cfn/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'sahara_api': {'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': 
[u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/sahara-api.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'sahara_engine': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-engine:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/sahara-engine.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro', u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'neutron_ovs_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 
'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'cinder_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_account_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_container_replicator': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', 
u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_object_updater': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_object_expirer': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'heat_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_container_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'panko_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd', u'/var/lib/kolla/config_files/panko_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'aodh_listener': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-listener:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_listener.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'neutron_api': {'start_order': 0, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd', u'/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_account_server': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 
'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'glance_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/glance:/var/log/glance', u'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/glance:/var/lib/glance:slave'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'logrotate_crond': {'image': 
u'192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": true, "checksum": "a1be6aa2d4cc45e104b7c75319745196e636d5d2", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_4.json", "gid": 0, "group": "root", "item": {"key": "step_4", "value": {"aodh_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/var/lib/kolla/config_files/aodh_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd", "", ""]}, "aodh_evaluator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-evaluator:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_evaluator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "aodh_listener": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-listener:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_listener.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "aodh_notifier": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-notifier:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_notifier.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "ceilometer_agent_central": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", 
"/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_central.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "ceilometer_agent_notification": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_notification.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src-panko:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd", "", ""]}, "cinder_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_scheduler": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder"]}, "glance_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/glance:/var/log/glance", "/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/glance:/var/lib/glance:slave"]}, "gnocchi_db_sync": {"detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", 
"volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_db_sync.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/lib/gnocchi:/var/lib/gnocchi", "/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro"]}, "heat_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api:/var/log/httpd", "/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro", "", ""]}, "heat_api_cfn": {"environment": 
["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api-cfn:/var/log/httpd", "/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api_cfn/:/var/lib/kolla/config_files/src:ro", "", ""]}, "heat_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api:/var/log/httpd", "/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro"]}, "heat_engine": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat/:/var/lib/kolla/config_files/src:ro"]}, "keystone_refresh": {"action": "exec", "command": ["keystone", "pkill", "--signal", "USR1", "httpd"], "start_order": 1, "user": "root"}, "logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}, "neutron_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd", "/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro"]}, "neutron_dhcp": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron", "/run/netns:/run/netns:shared", "/var/lib/openstack:/var/lib/openstack", "/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro", "/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro"]}, "neutron_l3_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_l3_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron", "/run/netns:/run/netns:shared", "/var/lib/openstack:/var/lib/openstack", 
"/var/lib/neutron/keepalived_wrapper:/usr/local/bin/keepalived:ro", "/var/lib/neutron/l3_haproxy_wrapper:/usr/local/bin/haproxy:ro", "/var/lib/neutron/dibbler_wrapper:/usr/local/bin/dibbler_client:ro"]}, "neutron_metadata_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/var/lib/neutron:/var/lib/neutron"]}, "neutron_ovs_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch"]}, "nova_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro", "", ""]}, "nova_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", 
"/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_conductor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-conductor:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_consoleauth": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-consoleauth:2018-06-19.4", "net": "host", "privileged": false, 
"restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_consoleauth.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_metadata": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "user": "nova", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_scheduler": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-scheduler:2018-06-19.4", "net": 
"host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro", "/run:/run"]}, "nova_vnc_proxy": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-novncproxy:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "panko_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", 
"net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd", "/var/lib/kolla/config_files/panko_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src:ro", "", ""]}, "sahara_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/sahara-api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "sahara_engine": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": 
"/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-sahara-engine:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/sahara-engine.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "swift_account_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_reaper": {"environment": 
["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_server": {"environment": 
["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, 
"swift_container_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", 
"/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_updater": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", 
"/var/cache/swift:/var/cache/swift"]}, "swift_object_expirer": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", 
"/var/cache/swift:/var/cache/swift"]}, "swift_object_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_updater": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", 
"/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_proxy": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", "net": "host", "restart": "always", "start_order": 2, "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/run:/run", "/srv/node:/srv/node", "/dev:/dev"]}, "swift_rsync": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_rsync.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev"]}}}, "md5sum": "1f138d32563935823e0ae333e7382fb3", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 48375, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657558.2-7955623552793/source", "state": "file", "uid": 0} >2018-06-22 04:52:39,356 p=11115 u=mistral | changed: [ceph-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_6.json", "gid": 0, "group": "root", "item": {"key": "step_6", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657558.73-80778970016239/source", "state": "file", "uid": 0} >2018-06-22 04:52:39,450 p=11115 u=mistral | changed: [controller-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_6.json", "gid": 0, "group": "root", "item": {"key": "step_6", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657558.84-143569703232188/source", "state": "file", "uid": 0} >2018-06-22 04:52:39,623 p=11115 u=mistral | changed: [compute-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_6.json", "gid": 0, "group": "root", "item": {"key": "step_6", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": 
"system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657558.82-143147435759400/source", "state": "file", "uid": 0} >2018-06-22 04:52:39,651 p=11115 u=mistral | TASK [Create /var/lib/kolla/config_files directory] **************************** >2018-06-22 04:52:40,048 p=11115 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/kolla/config_files", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:52:40,052 p=11115 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/kolla/config_files", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:52:40,083 p=11115 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/kolla/config_files", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 04:52:40,106 p=11115 u=mistral | TASK [Write kolla config json files] ******************************************* >2018-06-22 04:52:40,842 p=11115 u=mistral | changed: [ceph-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -s -n'}, 'key': u'/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": true, "checksum": "4c92019f9e75a1d5fd8ed0c534a1e2e37545fd52", "dest": "/var/lib/kolla/config_files/logrotate-crond.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": 
"4e44fe0987e7b03113435c6eed7ea3b5", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 160, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657560.21-80976344699651/source", "state": "file", "uid": 0} >2018-06-22 04:52:40,855 p=11115 u=mistral | changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -s -n'}, 'key': '/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": true, "checksum": "4c92019f9e75a1d5fd8ed0c534a1e2e37545fd52", "dest": "/var/lib/kolla/config_files/logrotate-crond.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "4e44fe0987e7b03113435c6eed7ea3b5", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 160, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657560.2-111582234899248/source", "state": "file", "uid": 0} >2018-06-22 04:52:40,995 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -s -n'}, 'key': '/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": true, "checksum": "4c92019f9e75a1d5fd8ed0c534a1e2e37545fd52", "dest": "/var/lib/kolla/config_files/logrotate-crond.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "4e44fe0987e7b03113435c6eed7ea3b5", "mode": "0600", 
"owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 160, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657560.35-19581203519629/source", "state": "file", "uid": 0} >2018-06-22 04:52:41,521 p=11115 u=mistral | changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/sbin/iscsid -f'}, 'key': '/var/lib/kolla/config_files/iscsid.json'}) => {"changed": true, "checksum": "40f9ceb4dd2fc8e9c51bf5152a0fa8e1d16d9137", "dest": "/var/lib/kolla/config_files/iscsid.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/iscsid.json", "value": {"command": "/usr/sbin/iscsid -f", "config_files": [{"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}]}}, "md5sum": "9cd3c2dc0153b127d70141dadfabd12c", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 175, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657560.86-279896088125236/source", "state": "file", "uid": 0} >2018-06-22 04:52:41,628 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': '/var/lib/kolla/config_files/keystone.json'}) => {"changed": true, "checksum": "8dec7e00a25c01fc0483b06f5e3d31c64b93ec3e", "dest": "/var/lib/kolla/config_files/keystone.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/keystone.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "1af9170c02e7b1819b37b8d71e67dff0", "mode": "0600", "owner": "root", "secontext": 
"system_u:object_r:var_lib_t:s0", "size": 167, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657561.0-3764547847514/source", "state": "file", "uid": 0} >2018-06-22 04:52:42,170 p=11115 u=mistral | changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/sbin/libvirtd', 'permissions': [{'owner': u'nova:nova', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/nova_libvirt.json'}) => {"changed": true, "checksum": "b50cbe1f8b020aa49249248b57310f45005813b3", "dest": "/var/lib/kolla/config_files/nova_libvirt.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_libvirt.json", "value": {"command": "/usr/sbin/libvirtd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "nova:nova", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "md5sum": "8356787bbcfcb5674a0bf2570719654a", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 512, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657561.53-170441738743425/source", "state": "file", "uid": 0} >2018-06-22 04:52:42,269 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': 
u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-backup --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/lib/cinder', 'recurse': True}, {'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_backup.json'}) => {"changed": true, "checksum": "0e697e31bdc439b99552bac9ffe0bab07f2af4a4", "dest": "/var/lib/kolla/config_files/cinder_backup.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/cinder_backup.json", "value": {"command": "/usr/bin/cinder-backup --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/lib/cinder", "recurse": true}, {"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "md5sum": "8e107eb8f6989be8375a0ff2dd5b4d57", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 651, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657561.64-18381507220017/source", "state": "file", "uid": 0} >2018-06-22 04:52:42,852 p=11115 u=mistral | changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 
'preserve_properties': True}, {'dest': u'/etc/ssh/', 'owner': u'root', 'perm': u'0600', 'source': u'/host-ssh/ssh_host_*_key'}], 'command': u'/usr/sbin/sshd -D -p 2022'}, 'key': '/var/lib/kolla/config_files/nova-migration-target.json'}) => {"changed": true, "checksum": "6a0a936a324363cd605e22c2327c17deb6dfbec2", "dest": "/var/lib/kolla/config_files/nova-migration-target.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova-migration-target.json", "value": {"command": "/usr/sbin/sshd -D -p 2022", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ssh/", "owner": "root", "perm": "0600", "source": "/host-ssh/ssh_host_*_key"}]}}, "md5sum": "161558d57b182ca70c6f9bbd7fcbda8a", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 258, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657562.18-20758154051170/source", "state": "file", "uid": 0} >2018-06-22 04:52:42,922 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': '/var/lib/kolla/config_files/swift_proxy_tls_proxy.json'}) => {"changed": true, "checksum": "8dec7e00a25c01fc0483b06f5e3d31c64b93ec3e", "dest": "/var/lib/kolla/config_files/swift_proxy_tls_proxy.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_proxy_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "1af9170c02e7b1819b37b8d71e67dff0", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 167, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657562.28-172666863931640/source", "state": 
"file", "uid": 0} >2018-06-22 04:52:43,515 p=11115 u=mistral | changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/virtlogd --config /etc/libvirt/virtlogd.conf'}, 'key': '/var/lib/kolla/config_files/nova_virtlogd.json'}) => {"changed": true, "checksum": "8bbfe195e54ddfe481aaad9744174f7344d49681", "dest": "/var/lib/kolla/config_files/nova_virtlogd.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_virtlogd.json", "value": {"command": "/usr/sbin/virtlogd --config /etc/libvirt/virtlogd.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "786b962e2df778e3ce02b185ef93deac", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 193, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657562.86-213904998811601/source", "state": "file", "uid": 0} >2018-06-22 04:52:43,575 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-auditor /etc/swift/account-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_account_auditor.json'}) => {"changed": true, "checksum": "413730fbf3f7935085cfda60cbc1535d8bce0caf", "dest": "/var/lib/kolla/config_files/swift_account_auditor.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_account_auditor.json", "value": {"command": "/usr/bin/swift-account-auditor /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "dfccd947a56ceb6fa2b71c400281a365", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 
200, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657562.93-3139049866187/source", "state": "file", "uid": 0} >2018-06-22 04:52:44,163 p=11115 u=mistral | changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/neutron_ovs_agent_launcher.sh', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_ovs_agent.json'}) => {"changed": true, "checksum": "bd1c4f0459f65e7f67a969a89c74a8b8cdcfd9f8", "dest": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "value": {"command": "/neutron_ovs_agent_launcher.sh", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "md5sum": "3599cf6b814b7c628c2887996ca46138", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 261, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657563.52-225803000016047/source", "state": "file", "uid": 0} >2018-06-22 04:52:44,204 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-replicator /etc/swift/account-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_account_replicator.json'}) => {"changed": true, "checksum": "2bf5ca66cb377c9fa3e6880f8b078d1312470cde", "dest": "/var/lib/kolla/config_files/swift_account_replicator.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_account_replicator.json", "value": {"command": "/usr/bin/swift-account-replicator 
/etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "d4a857b7e18f40f1cc1e6fd265c89770", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 203, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657563.58-198220929508847/source", "state": "file", "uid": 0} >2018-06-22 04:52:44,790 p=11115 u=mistral | changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/nova-compute ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}, {'owner': u'nova:nova', 'path': u'/var/lib/nova', 'recurse': True}, {'owner': u'nova:nova', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/nova_compute.json'}) => {"changed": true, "checksum": "bb1c3bcd199b74791ea32746c08f4925a3b585a2", "dest": "/var/lib/kolla/config_files/nova_compute.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_compute.json", "value": {"command": "/usr/bin/nova-compute ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}, {"owner": "nova:nova", "path": "/var/lib/nova", "recurse": 
true}, {"owner": "nova:nova", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "md5sum": "70b809037933259f45bb1585e9e6a4cc", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 643, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657564.17-72964841516987/source", "state": "file", "uid": 0} >2018-06-22 04:52:44,809 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-notifier', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/aodh_notifier.json'}) => {"changed": true, "checksum": "e01d19d7f7cff24dfcc0d132b7d8ceabba199142", "dest": "/var/lib/kolla/config_files/aodh_notifier.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/aodh_notifier.json", "value": {"command": "/usr/bin/aodh-notifier", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "md5sum": "5d4a748030a9a7476ccbd8902fb654fc", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 244, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657564.21-99102222818666/source", "state": "file", "uid": 0} >2018-06-22 04:52:45,407 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-scheduler ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_scheduler.json'}) => {"changed": true, "checksum": "23416bae23a2c08d2c534f76d19f8c4bad40ee92", "dest": 
"/var/lib/kolla/config_files/nova_scheduler.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_scheduler.json", "value": {"command": "/usr/bin/nova-scheduler ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": "d00e4198d95dede3f0b6ac351d57a982", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 246, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657564.82-206886539534603/source", "state": "file", "uid": 0} >2018-06-22 04:52:45,427 p=11115 u=mistral | changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log'}, 'key': '/var/lib/kolla/config_files/ceilometer_agent_compute.json'}) => {"changed": true, "checksum": "4b3e97fcd87fd70b35934d1ef908747f302a4d11", "dest": "/var/lib/kolla/config_files/ceilometer_agent_compute.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/ceilometer_agent_compute.json", "value": {"command": "/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "d91832a36a0ad3616a4e78c1af7d0db5", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 237, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657564.8-159764155081218/source", "state": "file", "uid": 0} >2018-06-22 04:52:45,940 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 
'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/heat_api_cron.json'}) => {"changed": true, "checksum": "a13a92b47f931e2e89d7e4bf5057a4307ab9cd45", "dest": "/var/lib/kolla/config_files/heat_api_cron.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/heat_api_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "md5sum": "e671c4783cc86fb2ad300fcd11b2f99b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 240, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657565.41-235242580668215/source", "state": "file", "uid": 0} >2018-06-22 04:52:46,480 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-dhcp-agent --log-file=/var/log/neutron/dhcp-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/etc/pki/tls/certs/neutron.crt'}, {'owner': u'neutron:neutron', 'path': u'/etc/pki/tls/private/neutron.key'}]}, 'key': '/var/lib/kolla/config_files/neutron_dhcp.json'}) => {"changed": 
true, "checksum": "da289f102f641cdd0a02df41c443d7d8387741a5", "dest": "/var/lib/kolla/config_files/neutron_dhcp.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/neutron_dhcp.json", "value": {"command": "/usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-dhcp-agent --log-file=/var/log/neutron/dhcp-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/etc/pki/tls/certs/neutron.crt"}, {"owner": "neutron:neutron", "path": "/etc/pki/tls/private/neutron.key"}]}}, "md5sum": "c5975567082648a9da814c433c49f2d6", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 875, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657565.95-255025580649838/source", "state": "file", "uid": 0} >2018-06-22 04:52:47,032 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg', 'permissions': [{'owner': u'haproxy:haproxy', 'path': u'/var/lib/haproxy', 'recurse': True}, {'owner': u'haproxy:haproxy', 'path': u'/etc/pki/tls/certs/haproxy/*', 'optional': True, 'perm': u'0600'}, 
{'owner': u'haproxy:haproxy', 'path': u'/etc/pki/tls/private/haproxy/*', 'optional': True, 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/haproxy.json'}) => {"changed": true, "checksum": "0801385cb9292b3b6eb8440166435242bd90e288", "dest": "/var/lib/kolla/config_files/haproxy.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/haproxy.json", "value": {"command": "/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg", "config_files": [{"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "haproxy:haproxy", "path": "/var/lib/haproxy", "recurse": true}, {"optional": true, "owner": "haproxy:haproxy", "path": "/etc/pki/tls/certs/haproxy/*", "perm": "0600"}, {"optional": true, "owner": "haproxy:haproxy", "path": "/etc/pki/tls/private/haproxy/*", "perm": "0600"}]}}, "md5sum": "a2742f7abd50bb0af0a4ba55b2f1f4ff", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 648, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657566.49-6156466634140/source", "state": "file", "uid": 0} >2018-06-22 04:52:47,573 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_api_cron.json'}) => {"changed": true, "checksum": "c1a1552a71f4daefebff5234f9d8ba71f4c64d76", "dest": "/var/lib/kolla/config_files/nova_api_cron.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_api_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": 
true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": "6b8ef057a2e5539eacd9f29fc4b94036", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 240, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657567.04-6059626168463/source", "state": "file", "uid": 0} >2018-06-22 04:52:48,123 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/gnocchi_db_sync.json'}) => {"changed": true, "checksum": "a6d2eb62af2f11437c704d13adf72d498324ce2a", "dest": "/var/lib/kolla/config_files/gnocchi_db_sync.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/gnocchi_db_sync.json", "value": {"command": "/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "md5sum": "d586f0c2ff043bece10efff986d635a3", "mode": "0600", "owner": "root", "secontext": 
"system_u:object_r:var_lib_t:s0", "size": 531, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657567.58-150534154467225/source", "state": "file", "uid": 0} >2018-06-22 04:52:48,668 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-reaper /etc/swift/account-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_account_reaper.json'}) => {"changed": true, "checksum": "b061cf7478060add5d079aafaeae81b445251a8f", "dest": "/var/lib/kolla/config_files/swift_account_reaper.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_account_reaper.json", "value": {"command": "/usr/bin/swift-account-reaper /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "0f3bbe74ca95c8cca321ee32e2aff7d1", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 199, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657568.13-189764842939711/source", "state": "file", "uid": 0} >2018-06-22 04:52:49,222 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/sahara-engine --config-file /etc/sahara/sahara.conf', 'permissions': [{'owner': u'sahara:sahara', 'path': u'/var/lib/sahara', 'recurse': True}, {'owner': u'sahara:sahara', 'path': u'/var/log/sahara', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/sahara-engine.json'}) => {"changed": true, "checksum": "b7397fff831b47db0b6111663d816a64a389cb25", "dest": "/var/lib/kolla/config_files/sahara-engine.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/sahara-engine.json", "value": {"command": 
"/usr/bin/sahara-engine --config-file /etc/sahara/sahara.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "sahara:sahara", "path": "/var/lib/sahara", "recurse": true}, {"owner": "sahara:sahara", "path": "/var/log/sahara", "recurse": true}]}}, "md5sum": "ac2c7a84fc46a1f1d128201ce5b67c2d", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 360, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657568.67-233183374200431/source", "state": "file", "uid": 0} >2018-06-22 04:52:49,777 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': u'/dev/null'}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'redis:redis', 'path': u'/var/run/redis', 'recurse': True}, {'owner': u'redis:redis', 'path': u'/var/lib/redis', 'recurse': True}, {'owner': u'redis:redis', 'path': u'/var/log/redis', 'recurse': True}, {'owner': u'redis:redis', 'path': u'/etc/pki/tls/certs/redis.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'redis:redis', 'path': u'/etc/pki/tls/private/redis.key', 'optional': True, 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/redis.json'}) => {"changed": true, "checksum": "66d6d6bd51aaa0c100cdfc7688267a4342c7859f", "dest": "/var/lib/kolla/config_files/redis.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/redis.json", "value": {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, 
{"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "redis:redis", "path": "/var/run/redis", "recurse": true}, {"owner": "redis:redis", "path": "/var/lib/redis", "recurse": true}, {"owner": "redis:redis", "path": "/var/log/redis", "recurse": true}, {"optional": true, "owner": "redis:redis", "path": "/etc/pki/tls/certs/redis.crt", "perm": "0600"}, {"optional": true, "owner": "redis:redis", "path": "/etc/pki/tls/private/redis.key", "perm": "0600"}]}}, "md5sum": "ceafff1d742633f8759bdb1af0e3ebd4", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 843, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657569.23-220913240910372/source", "state": "file", "uid": 0} >2018-06-22 04:52:50,336 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-novncproxy --web /usr/share/novnc/ ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_vnc_proxy.json'}) => {"changed": true, "checksum": "b64555136537c36af22340fb15f21f0e01ac3495", "dest": "/var/lib/kolla/config_files/nova_vnc_proxy.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_vnc_proxy.json", "value": {"command": "/usr/bin/nova-novncproxy --web /usr/share/novnc/ ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": "557a4e9522f54cfbd6456516e67f4971", "mode": "0600", "owner": "root", "secontext": 
"system_u:object_r:var_lib_t:s0", "size": 271, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657569.78-120789639929450/source", "state": "file", "uid": 0} >2018-06-22 04:52:50,890 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf --config-file /etc/glance/glance-api.conf', 'permissions': [{'owner': u'glance:glance', 'path': u'/var/lib/glance', 'recurse': True}, {'owner': u'glance:glance', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/glance_api.json'}) => {"changed": true, "checksum": "2a93405ac579e31c6e5732983f3d7dd8bed55b33", "dest": "/var/lib/kolla/config_files/glance_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/glance_api.json", "value": {"command": "/usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf --config-file /etc/glance/glance-api.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "glance:glance", "path": "/var/lib/glance", "recurse": true}, {"owner": "glance:glance", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "md5sum": "30c5fe40dffc304e7edeab4019e96e92", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 556, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657570.34-7685249462976/source", "state": "file", "uid": 0} >2018-06-22 04:52:51,438 p=11115 u=mistral | changed: [controller-0] => 
(item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-auditor /etc/swift/container-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_container_auditor.json'}) => {"changed": true, "checksum": "739f6562d3ea24561c6d8bcf37041a9eac928257", "dest": "/var/lib/kolla/config_files/swift_container_auditor.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_container_auditor.json", "value": {"command": "/usr/bin/swift-container-auditor /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "b63816c7c08aef58249d13b65b387da6", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 204, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657570.9-267347808564355/source", "state": "file", "uid": 0} >2018-06-22 04:52:52,004 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-panko/*', 'preserve_properties': True}], 'command': u'/usr/bin/ceilometer-agent-notification --logfile /var/log/ceilometer/agent-notification.log', 'permissions': [{'owner': u'root:ceilometer', 'path': u'/etc/panko', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/ceilometer_agent_notification.json'}) => {"changed": true, "checksum": "98adef088b2ae2648ac88b812890957ec54eff13", "dest": "/var/lib/kolla/config_files/ceilometer_agent_notification.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/ceilometer_agent_notification.json", "value": {"command": "/usr/bin/ceilometer-agent-notification --logfile /var/log/ceilometer/agent-notification.log", 
"config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-panko/*"}], "permissions": [{"owner": "root:ceilometer", "path": "/etc/panko", "recurse": true}]}}, "md5sum": "4a38c9578181c292891f5f7bdb9f791b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 428, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657571.45-51057493302238/source", "state": "file", "uid": 0} >2018-06-22 04:52:52,585 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-expirer /etc/swift/object-expirer.conf'}, 'key': '/var/lib/kolla/config_files/swift_object_expirer.json'}) => {"changed": true, "checksum": "ebbb7ee6895cea2b9278f33e888881d3d3f1a68a", "dest": "/var/lib/kolla/config_files/swift_object_expirer.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_object_expirer.json", "value": {"command": "/usr/bin/swift-object-expirer /etc/swift/object-expirer.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "e4bf891d8ffc9a015be201a6ef0d5abc", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 199, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657572.01-126680399325689/source", "state": "file", "uid": 0} >2018-06-22 04:52:53,139 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/ceilometer-polling --polling-namespaces central --logfile /var/log/ceilometer/central.log'}, 'key': 
'/var/lib/kolla/config_files/ceilometer_agent_central.json'}) => {"changed": true, "checksum": "53d52f7d52f0fb3da33de2c20414eb3248593fdd", "dest": "/var/lib/kolla/config_files/ceilometer_agent_central.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/ceilometer_agent_central.json", "value": {"command": "/usr/bin/ceilometer-polling --polling-namespaces central --logfile /var/log/ceilometer/central.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "2863f917d7ada51e9570fb53bb363eed", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 237, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657572.59-38365362903648/source", "state": "file", "uid": 0} >2018-06-22 04:52:53,705 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/heat_api.json'}) => {"changed": true, "checksum": "454582321236a137f78205f328bae190c02f06b0", "dest": "/var/lib/kolla/config_files/heat_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/heat_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "md5sum": "c04ac0476ee6639fadf252b0e9d9649b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 250, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657573.15-235780208181276/source", "state": "file", "uid": 0} >2018-06-22 04:52:54,276 p=11115 u=mistral | 
changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/rsync --daemon --no-detach --config=/etc/rsyncd.conf'}, 'key': '/var/lib/kolla/config_files/swift_rsync.json'}) => {"changed": true, "checksum": "44a8f1a58092190d553d3f589cab9ae566f8dc81", "dest": "/var/lib/kolla/config_files/swift_rsync.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_rsync.json", "value": {"command": "/usr/bin/rsync --daemon --no-detach --config=/etc/rsyncd.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "886febadf691905adf0c129f3aa0197a", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 200, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657573.71-58615877851119/source", "state": "file", "uid": 0} >2018-06-22 04:52:54,854 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-server /etc/swift/account-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_account_server.json'}) => {"changed": true, "checksum": "279b64a7d6914d2a03c86c703f53e3d71b1daef1", "dest": "/var/lib/kolla/config_files/swift_account_server.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_account_server.json", "value": {"command": "/usr/bin/swift-account-server /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "b41d67c146c800142c5405fe5a0b332e", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 199, "src": 
"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657574.28-211409255370417/source", "state": "file", "uid": 0} >2018-06-22 04:52:55,414 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_api_cron.json'}) => {"changed": true, "checksum": "06055a69fec2bc513b4c86ceb654a5fc29bd0866", "dest": "/var/lib/kolla/config_files/cinder_api_cron.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/cinder_api_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "md5sum": "801aba1299d99bfd7e63f66ca7a4ba40", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 246, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657574.87-171880411140666/source", "state": "file", "uid": 0} >2018-06-22 04:52:55,943 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-proxy-server /etc/swift/proxy-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_proxy.json'}) => {"changed": true, "checksum": "a0874b803c5238a4eeb12b1265d5d1db93c0d3d4", "dest": "/var/lib/kolla/config_files/swift_proxy.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_proxy.json", "value": {"command": "/usr/bin/swift-proxy-server /etc/swift/proxy-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": 
"/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "a38e4e3ae519b3b0824e19184e521b36", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 195, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657575.42-170940757053991/source", "state": "file", "uid": 0} >2018-06-22 04:52:56,465 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-updater /etc/swift/container-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_container_updater.json'}) => {"changed": true, "checksum": "8dbfc3669a6d79fb30702be502ced7501500480a", "dest": "/var/lib/kolla/config_files/swift_container_updater.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_container_updater.json", "value": {"command": "/usr/bin/swift-container-updater /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "a697319d04392dc572dff6236144571f", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 204, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657575.95-14713589322340/source", "state": "file", "uid": 0} >2018-06-22 04:52:56,987 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/xinetd -dontfork'}, 'key': '/var/lib/kolla/config_files/clustercheck.json'}) => {"changed": true, "checksum": "3c87335a28b992f90769aea9ea62fb610f8236f1", "dest": "/var/lib/kolla/config_files/clustercheck.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/clustercheck.json", "value": {"command": "/usr/sbin/xinetd -dontfork", "config_files": 
[{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "d74434e7b8bcaca0b227152346c13db8", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 165, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657576.47-224794173735198/source", "state": "file", "uid": 0} >2018-06-22 04:52:57,514 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': u'/dev/null'}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'mysql:mysql', 'path': u'/var/log/mysql', 'recurse': True}, {'owner': u'mysql:mysql', 'path': u'/etc/pki/tls/certs/mysql.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'mysql:mysql', 'path': u'/etc/pki/tls/private/mysql.key', 'optional': True, 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/mysql.json'}) => {"changed": true, "checksum": "b52f0d28ed1ac134c64994c08b3f2378e8dff494", "dest": "/var/lib/kolla/config_files/mysql.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/mysql.json", "value": {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "mysql:mysql", "path": "/var/log/mysql", "recurse": true}, {"optional": true, "owner": "mysql:mysql", "path": 
"/etc/pki/tls/certs/mysql.crt", "perm": "0600"}, {"optional": true, "owner": "mysql:mysql", "path": "/etc/pki/tls/private/mysql.key", "perm": "0600"}]}}, "md5sum": "4d15ed291dbe96e88b9a128b0e5c99e9", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 687, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657576.99-262335370471053/source", "state": "file", "uid": 0} >2018-06-22 04:52:58,051 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_placement.json'}) => {"changed": true, "checksum": "d061b71e9106733354c297cbb7b327a22e476de5", "dest": "/var/lib/kolla/config_files/nova_placement.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_placement.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": "941db485b7079f2f0e008e1bdff8e45f", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 250, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657577.52-260101088291965/source", "state": "file", "uid": 0} >2018-06-22 04:52:58,572 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/sahara-api --config-file /etc/sahara/sahara.conf', 'permissions': [{'owner': u'sahara:sahara', 'path': u'/var/lib/sahara', 'recurse': True}, {'owner': u'sahara:sahara', 'path': u'/var/log/sahara', 
'recurse': True}]}, 'key': '/var/lib/kolla/config_files/sahara-api.json'}) => {"changed": true, "checksum": "fd070eb1bdc97442fddc24f503fe5e3251b89e28", "dest": "/var/lib/kolla/config_files/sahara-api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/sahara-api.json", "value": {"command": "/usr/bin/sahara-api --config-file /etc/sahara/sahara.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "sahara:sahara", "path": "/var/lib/sahara", "recurse": true}, {"owner": "sahara:sahara", "path": "/var/log/sahara", "recurse": true}]}}, "md5sum": "bd52668d37c227cc00c418bbe889ab90", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 357, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657578.06-208056749156319/source", "state": "file", "uid": 0} >2018-06-22 04:52:59,104 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/aodh_api.json'}) => {"changed": true, "checksum": "f4177197cb07127689ae10a60020efa3a5e0d457", "dest": "/var/lib/kolla/config_files/aodh_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/aodh_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "md5sum": "582326e52a94260e71a4a19dc4d75191", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 250, "src": 
"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657578.58-107274950397554/source", "state": "file", "uid": 0} >2018-06-22 04:52:59,667 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'keystone:keystone', 'path': u'/var/log/keystone', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/keystone_cron.json'}) => {"changed": true, "checksum": "815ba71e0584cb12e7d40f794603c6bfb1800626", "dest": "/var/lib/kolla/config_files/keystone_cron.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/keystone_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "keystone:keystone", "path": "/var/log/keystone", "recurse": true}]}}, "md5sum": "b3b3bbd6499e09c424665311a5e66136", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 252, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657579.11-104844530674923/source", "state": "file", "uid": 0} >2018-06-22 04:53:00,228 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': '/var/lib/kolla/config_files/neutron_server_tls_proxy.json'}) => {"changed": true, "checksum": "8dec7e00a25c01fc0483b06f5e3d31c64b93ec3e", "dest": "/var/lib/kolla/config_files/neutron_server_tls_proxy.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/neutron_server_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": 
"/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "1af9170c02e7b1819b37b8d71e67dff0", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 167, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657579.67-164209914159369/source", "state": "file", "uid": 0} >2018-06-22 04:53:00,796 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-replicator /etc/swift/object-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_object_replicator.json'}) => {"changed": true, "checksum": "659d25615392d81b2f6bc001067232495de4d6ac", "dest": "/var/lib/kolla/config_files/swift_object_replicator.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_object_replicator.json", "value": {"command": "/usr/bin/swift-object-replicator /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "cdea8a372a87263d5fc44b482867a705", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 201, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657580.24-29112398894306/source", "state": "file", "uid": 0} >2018-06-22 04:53:01,352 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-conductor ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_conductor.json'}) => {"changed": true, "checksum": "01a54792c74d0ebd057e8d0f44e6e8e619283e62", "dest": "/var/lib/kolla/config_files/nova_conductor.json", "gid": 0, "group": "root", "item": {"key": 
"/var/lib/kolla/config_files/nova_conductor.json", "value": {"command": "/usr/bin/nova-conductor ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": "ccbba0ad7a926ceca2bf858b8a9cc376", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 246, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657580.8-43686797879617/source", "state": "file", "uid": 0} >2018-06-22 04:53:01,913 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/heat_api_cfn.json'}) => {"changed": true, "checksum": "454582321236a137f78205f328bae190c02f06b0", "dest": "/var/lib/kolla/config_files/heat_api_cfn.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/heat_api_cfn.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "md5sum": "c04ac0476ee6639fadf252b0e9d9649b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 250, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657581.36-209079494874359/source", "state": "file", "uid": 0} >2018-06-22 04:53:02,461 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-api-metadata ', 'permissions': [{'owner': 
u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_metadata.json'}) => {"changed": true, "checksum": "edb529183cc509ea82818edf4d88e3650b5ffc57", "dest": "/var/lib/kolla/config_files/nova_metadata.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_metadata.json", "value": {"command": "/usr/bin/nova-api-metadata ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": "45129bd8b5b9aef067edb558a9fb2c68", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 249, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657581.92-106605506371830/source", "state": "file", "uid": 0} >2018-06-22 04:53:03,035 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/neutron_ovs_agent_launcher.sh', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_ovs_agent.json'}) => {"changed": true, "checksum": "bd1c4f0459f65e7f67a969a89c74a8b8cdcfd9f8", "dest": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "value": {"command": "/neutron_ovs_agent_launcher.sh", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "md5sum": "3599cf6b814b7c628c2887996ca46138", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 261, "src": 
"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657582.47-185929622872141/source", "state": "file", "uid": 0} >2018-06-22 04:53:03,611 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': u'/dev/null'}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'rabbitmq:rabbitmq', 'path': u'/var/lib/rabbitmq', 'recurse': True}, {'owner': u'rabbitmq:rabbitmq', 'path': u'/var/log/rabbitmq', 'recurse': True}, {'owner': u'rabbitmq:rabbitmq', 'path': u'/etc/pki/tls/certs/rabbitmq.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'rabbitmq:rabbitmq', 'path': u'/etc/pki/tls/private/rabbitmq.key', 'optional': True, 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/rabbitmq.json'}) => {"changed": true, "checksum": "205ddacf194881a04c54779e3049b3c59ef6c4af", "dest": "/var/lib/kolla/config_files/rabbitmq.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/rabbitmq.json", "value": {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "rabbitmq:rabbitmq", "path": "/var/lib/rabbitmq", "recurse": true}, {"owner": "rabbitmq:rabbitmq", "path": "/var/log/rabbitmq", "recurse": true}, {"optional": true, "owner": "rabbitmq:rabbitmq", "path": "/etc/pki/tls/certs/rabbitmq.crt", "perm": "0600"}, {"optional": true, 
"owner": "rabbitmq:rabbitmq", "path": "/etc/pki/tls/private/rabbitmq.key", "perm": "0600"}]}}, "md5sum": "1097dade2a2355fd51207668004d093d", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 792, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657583.04-24684763953684/source", "state": "file", "uid": 0} >2018-06-22 04:53:04,177 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-consoleauth ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_consoleauth.json'}) => {"changed": true, "checksum": "a960878859377dfae6334d9b7eaa9f554ab31798", "dest": "/var/lib/kolla/config_files/nova_consoleauth.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_consoleauth.json", "value": {"command": "/usr/bin/nova-consoleauth ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": "2a66fc646aae3e5913e0598ccef3881f", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 248, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657583.62-199004407649212/source", "state": "file", "uid": 0} >2018-06-22 04:53:04,743 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-updater /etc/swift/object-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_object_updater.json'}) => {"changed": true, "checksum": "4f7a34f38afe301f885e25eb10225c461ab1d0b1", "dest": 
"/var/lib/kolla/config_files/swift_object_updater.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_object_updater.json", "value": {"command": "/usr/bin/swift-object-updater /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "71a7e788486d505cfec645da0ac337cd", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 198, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657584.18-237750162729641/source", "state": "file", "uid": 0} >2018-06-22 04:53:05,313 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server --log-file=/var/log/neutron/server.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_api.json'}) => {"changed": true, "checksum": "5a73d3b7ef652341120c9298683d3a26f3fb668b", "dest": "/var/lib/kolla/config_files/neutron_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/neutron_api.json", "value": {"command": "/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server --log-file=/var/log/neutron/server.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": 
"/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "md5sum": "c48346aa3f8c096826ebab378db9dfb9", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 549, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657584.75-63469795619827/source", "state": "file", "uid": 0} >2018-06-22 04:53:05,874 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_scheduler.json'}) => {"changed": true, "checksum": "9ec49193a63036ecf32a1479eabdac05dcab06e0", "dest": "/var/lib/kolla/config_files/cinder_scheduler.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/cinder_scheduler.json", "value": {"command": "/usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "md5sum": "93e9da0d08550be0ed30576cefdfbfbb", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 340, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657585.32-13246978807409/source", "state": "file", "uid": 0} >2018-06-22 04:53:06,431 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': 
u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/gnocchi-metricd', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/gnocchi_metricd.json'}) => {"changed": true, "checksum": "c8763a8c16702042afe553b54212340d800e1509", "dest": "/var/lib/kolla/config_files/gnocchi_metricd.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/gnocchi_metricd.json", "value": {"command": "/usr/bin/gnocchi-metricd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "md5sum": "db9bd25aa2fcd2845d442869e986e7d8", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 471, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657585.88-238710339852776/source", "state": "file", "uid": 0} >2018-06-22 04:53:06,979 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-metadata-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/metadata_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-metadata-agent --log-file=/var/log/neutron/metadata-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, {'owner': 
u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_metadata_agent.json'}) => {"changed": true, "checksum": "fe01b9d48d08f239bbf9acf7e2a1492397180c8e", "dest": "/var/lib/kolla/config_files/neutron_metadata_agent.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/neutron_metadata_agent.json", "value": {"command": "/usr/bin/neutron-metadata-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/metadata_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-metadata-agent --log-file=/var/log/neutron/metadata-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}]}}, "md5sum": "a26f6acfc823d6e2e5b34367b859c8fa", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 617, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657586.44-228755440000312/source", "state": "file", "uid": 0} >2018-06-22 04:53:07,547 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-replicator /etc/swift/container-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_container_replicator.json'}) => {"changed": true, "checksum": "a418eddca731078cfd8fe2fda7ee64d9ffaf7dda", "dest": "/var/lib/kolla/config_files/swift_container_replicator.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_container_replicator.json", "value": {"command": "/usr/bin/swift-container-replicator /etc/swift/container-server.conf", 
"config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "930bbe0f8c13b55f664fb3a89dfa1613", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 207, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657586.99-216725311869729/source", "state": "file", "uid": 0} >2018-06-22 04:53:08,125 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/heat-engine --config-file /usr/share/heat/heat-dist.conf --config-file /etc/heat/heat.conf ', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/heat_engine.json'}) => {"changed": true, "checksum": "fe3989178a2ea434bae6dfd64b04423e3ea005bc", "dest": "/var/lib/kolla/config_files/heat_engine.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/heat_engine.json", "value": {"command": "/usr/bin/heat-engine --config-file /usr/share/heat/heat-dist.conf --config-file /etc/heat/heat.conf ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "md5sum": "aee05ebc54399dde3dfc3577c3431a92", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 322, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657587.55-96402640312089/source", "state": "file", "uid": 0} >2018-06-22 04:53:08,696 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 
'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_api.json'}) => {"changed": true, "checksum": "d061b71e9106733354c297cbb7b327a22e476de5", "dest": "/var/lib/kolla/config_files/nova_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": "941db485b7079f2f0e008e1bdff8e45f", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 250, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657588.13-36214649223707/source", "state": "file", "uid": 0} >2018-06-22 04:53:09,263 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-server /etc/swift/object-server.conf', 'permissions': [{'owner': u'swift:swift', 'path': u'/var/cache/swift', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/swift_object_server.json'}) => {"changed": true, "checksum": "460cdcfbcfac45a30b03df89ac84d2f34db64d72", "dest": "/var/lib/kolla/config_files/swift_object_server.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_object_server.json", "value": {"command": "/usr/bin/swift-object-server /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "swift:swift", "path": "/var/cache/swift", "recurse": true}]}}, "md5sum": "b00c233fd2cd32c68e429e42918b8245", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 285, "src": 
"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657588.7-144893641734344/source", "state": "file", "uid": 0} >2018-06-22 04:53:09,840 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'stunnel /etc/stunnel/stunnel.conf'}, 'key': '/var/lib/kolla/config_files/redis_tls_proxy.json'}) => {"changed": true, "checksum": "80800f9f267aaf3497499af70b7945e3b6ae771b", "dest": "/var/lib/kolla/config_files/redis_tls_proxy.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/redis_tls_proxy.json", "value": {"command": "stunnel /etc/stunnel/stunnel.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "c45d2764863cc585b994d432412ff9e8", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 172, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657589.27-262325216755815/source", "state": "file", "uid": 0} >2018-06-22 04:53:10,417 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/gnocchi_api.json'}) => {"changed": true, "checksum": "39f33531116fbcba7a5d9c1cbbc32f4af5e6b981", "dest": "/var/lib/kolla/config_files/gnocchi_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/gnocchi_api.json", "value": {"command": 
"/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "md5sum": "5e924ffe736d942bf904a791bf5b5af2", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 475, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657589.85-235823426524195/source", "state": "file", "uid": 0} >2018-06-22 04:53:10,984 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_api.json'}) => {"changed": true, "checksum": "7f36445e4c6eb403ce919ca3adee771d4cb3bcce", "dest": "/var/lib/kolla/config_files/cinder_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/cinder_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "md5sum": "bb3e2e5741eb3e5b6c53da835e66d00d", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 256, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657590.43-138061341600476/source", "state": "file", "uid": 0} >2018-06-22 04:53:11,567 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 
'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_volume.json'}) => {"changed": true, "checksum": "e800a0e1c86f8fa7a41efbf24ce38f48a458ba51", "dest": "/var/lib/kolla/config_files/cinder_volume.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/cinder_volume.json", "value": {"command": "/usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "md5sum": "a85ec43ba623807ac022c04663fa68f5", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 579, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657590.99-148756833675178/source", "state": "file", "uid": 0} >2018-06-22 04:53:12,139 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'panko:panko', 
'path': u'/var/log/panko', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/panko_api.json'}) => {"changed": true, "checksum": "2db8f01174b9c2aa3a180add472b54891aed5cd6", "dest": "/var/lib/kolla/config_files/panko_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/panko_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "panko:panko", "path": "/var/log/panko", "recurse": true}]}}, "md5sum": "7d9530934c938a4c96f71797957f7ca8", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 253, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657591.58-193211750847949/source", "state": "file", "uid": 0} >2018-06-22 04:53:12,707 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-auditor /etc/swift/object-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_object_auditor.json'}) => {"changed": true, "checksum": "fbcdad9219733b81ad969426553906c1a8648897", "dest": "/var/lib/kolla/config_files/swift_object_auditor.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_object_auditor.json", "value": {"command": "/usr/bin/swift-object-auditor /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "45f7348541b64a76aec07477ea1d7358", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 198, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657592.15-105532272010915/source", "state": "file", "uid": 0} >2018-06-22 04:53:13,269 p=11115 u=mistral | changed: [controller-0] => (item={'value': 
{'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-l3-agent --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/l3_agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-l3-agent --log-file=/var/log/neutron/l3-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_l3_agent.json'}) => {"changed": true, "checksum": "cd233477dc9defd8028ac1a8fe736b8c9fcde9f8", "dest": "/var/lib/kolla/config_files/neutron_l3_agent.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/neutron_l3_agent.json", "value": {"command": "/usr/bin/neutron-l3-agent --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/l3_agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-l3-agent --log-file=/var/log/neutron/l3-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}]}}, "md5sum": "b47a8dc2601f0e1c404b9009d1c99c32", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 634, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657592.72-37105977843581/source", "state": "file", "uid": 0} >2018-06-22 04:53:13,823 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': 
u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-listener', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/aodh_listener.json'}) => {"changed": true, "checksum": "a7135286aba5eb111dc77c913fc1f7dc0977e783", "dest": "/var/lib/kolla/config_files/aodh_listener.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/aodh_listener.json", "value": {"command": "/usr/bin/aodh-listener", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "md5sum": "ff2b7ae2bb8061a36a8223f5c34a970b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 244, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657593.28-112233631951978/source", "state": "file", "uid": 0} >2018-06-22 04:53:14,378 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-server /etc/swift/container-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_container_server.json'}) => {"changed": true, "checksum": "1f5cc060becbca7be3515f39537993b91e109a6d", "dest": "/var/lib/kolla/config_files/swift_container_server.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_container_server.json", "value": {"command": "/usr/bin/swift-container-server /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "59a9944c2c3c07fec0293d2efd7d8082", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 203, "src": 
"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657593.83-73049650497352/source", "state": "file", "uid": 0} >2018-06-22 04:53:14,931 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-evaluator', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/aodh_evaluator.json'}) => {"changed": true, "checksum": "596ee1b7f45471d04a0bc3d985f82ad722631b98", "dest": "/var/lib/kolla/config_files/aodh_evaluator.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/aodh_evaluator.json", "value": {"command": "/usr/bin/aodh-evaluator", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "md5sum": "94c5432632bf2acca69de0063414183b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 245, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657594.39-267694205223371/source", "state": "file", "uid": 0} >2018-06-22 04:53:15,484 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': '/var/lib/kolla/config_files/glance_api_tls_proxy.json'}) => {"changed": true, "checksum": "8dec7e00a25c01fc0483b06f5e3d31c64b93ec3e", "dest": "/var/lib/kolla/config_files/glance_api_tls_proxy.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/glance_api_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": 
"/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "1af9170c02e7b1819b37b8d71e67dff0", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 167, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657594.94-243292814442305/source", "state": "file", "uid": 0} >2018-06-22 04:53:16,036 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/sbin/iscsid -f'}, 'key': '/var/lib/kolla/config_files/iscsid.json'}) => {"changed": true, "checksum": "40f9ceb4dd2fc8e9c51bf5152a0fa8e1d16d9137", "dest": "/var/lib/kolla/config_files/iscsid.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/iscsid.json", "value": {"command": "/usr/sbin/iscsid -f", "config_files": [{"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}]}}, "md5sum": "9cd3c2dc0153b127d70141dadfabd12c", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 175, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657595.49-210967943901005/source", "state": "file", "uid": 0} >2018-06-22 04:53:16,587 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/gnocchi-statsd', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/gnocchi_statsd.json'}) => {"changed": true, "checksum": "1a38774f0fed561a8f1ad8c7f0a976a71a7f7008", "dest": 
"/var/lib/kolla/config_files/gnocchi_statsd.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/gnocchi_statsd.json", "value": {"command": "/usr/bin/gnocchi-statsd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "md5sum": "b98425b2f26d4e30448a72685b1f89ad", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 470, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657596.04-125877122881767/source", "state": "file", "uid": 0} >2018-06-22 04:53:17,151 p=11115 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'apache:apache', 'path': u'/var/log/horizon/', 'recurse': True}, {'owner': u'apache:apache', 'path': u'/etc/openstack-dashboard/', 'recurse': True}, {'owner': u'apache:apache', 'path': u'/usr/share/openstack-dashboard/openstack_dashboard/local/', 'recurse': False}, {'owner': u'apache:apache', 'path': u'/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.d/', 'recurse': False}]}, 'key': '/var/lib/kolla/config_files/horizon.json'}) => {"changed": true, "checksum": "fc55910103403d0bb92e62e940dbd536aff43f84", "dest": "/var/lib/kolla/config_files/horizon.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/horizon.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": 
"/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "apache:apache", "path": "/var/log/horizon/", "recurse": true}, {"owner": "apache:apache", "path": "/etc/openstack-dashboard/", "recurse": true}, {"owner": "apache:apache", "path": "/usr/share/openstack-dashboard/openstack_dashboard/local/", "recurse": false}, {"owner": "apache:apache", "path": "/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.d/", "recurse": false}]}}, "md5sum": "77504b6ea1f544f3c70dbc4115bfc354", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 587, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657596.59-170881550744964/source", "state": "file", "uid": 0} >2018-06-22 04:53:17,211 p=11115 u=mistral | TASK [Clean /var/lib/docker-puppet/docker-puppet-tasks*.json files] ************ >2018-06-22 04:53:17,222 p=11115 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-06-22 04:53:17,247 p=11115 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-06-22 04:53:17,274 p=11115 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-06-22 04:53:17,300 p=11115 u=mistral | TASK [Write docker-puppet-tasks json files] ************************************ >2018-06-22 04:53:17,898 p=11115 u=mistral | changed: [controller-0] => (item={'value': [{'puppet_tags': u'keystone_config,keystone_domain_config,keystone_endpoint,keystone_identity_provider,keystone_paste_ini,keystone_role,keystone_service,keystone_tenant,keystone_user,keystone_user_role,keystone_domain', 'config_volume': u'keystone_init_tasks', 'step_config': u'include ::tripleo::profile::base::keystone', 'config_image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4'}], 'key': u'step_3'}) => {"changed": true, "checksum": "730e4e048205e1fadc6cd518326d4622d77edad6", "dest": 
"/var/lib/docker-puppet/docker-puppet-tasks3.json", "gid": 0, "group": "root", "item": {"key": "step_3", "value": [{"config_image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "config_volume": "keystone_init_tasks", "puppet_tags": "keystone_config,keystone_domain_config,keystone_endpoint,keystone_identity_provider,keystone_paste_ini,keystone_role,keystone_service,keystone_tenant,keystone_user,keystone_user_role,keystone_domain", "step_config": "include ::tripleo::profile::base::keystone"}]}, "md5sum": "56e31c6a27d11dc618833f5679009c9d", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 397, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657597.35-38935569640908/source", "state": "file", "uid": 0} >2018-06-22 04:53:17,922 p=11115 u=mistral | TASK [Set host puppet debugging fact string] *********************************** >2018-06-22 04:53:17,951 p=11115 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:53:17,976 p=11115 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:53:17,994 p=11115 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:53:18,021 p=11115 u=mistral | TASK [Write the config_step hieradata] ***************************************** >2018-06-22 04:53:18,703 p=11115 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "dfdcc7695edd230e7a2c06fc7b739bfa56506d8f", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "f0ef53dcc6eb8440334b1ebaa90bfd63", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657598.1-218205623805754/source", "state": "file", "uid": 0} >2018-06-22 04:53:18,805 p=11115 u=mistral | changed: [compute-0] => {"changed": true, 
"checksum": "dfdcc7695edd230e7a2c06fc7b739bfa56506d8f", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "f0ef53dcc6eb8440334b1ebaa90bfd63", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657598.15-10832147738232/source", "state": "file", "uid": 0} >2018-06-22 04:53:18,818 p=11115 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "dfdcc7695edd230e7a2c06fc7b739bfa56506d8f", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "f0ef53dcc6eb8440334b1ebaa90bfd63", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657598.17-124256530697142/source", "state": "file", "uid": 0} >2018-06-22 04:53:18,843 p=11115 u=mistral | TASK [Run puppet host configuration for step 1] ******************************** >2018-06-22 04:53:34,157 p=11115 u=mistral | changed: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-06-22 04:53:35,150 p=11115 u=mistral | changed: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-06-22 04:55:33,390 p=11115 u=mistral | changed: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-06-22 04:55:33,415 p=11115 u=mistral | TASK [Debug output for task which failed: Run puppet host configuration for step 1] *** >2018-06-22 04:55:33,556 p=11115 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- 
hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.46 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Controller1]/ensure: created", > "Notice: /Stage[main]/Certmonger/Service[certmonger]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Time::Ntp/Service[chronyd]/ensure: ensure changed 'running' to 'stopped'", > "Notice: /Stage[main]/Ntp::Config/File[/etc/ntp.conf]/content: content changed '{md5}913c85f0fde85f83c2d6c030ecf259e9' to '{md5}c1d92fa159fef3afd721be5f86af886d'", > "Notice: /Stage[main]/Ntp::Service/Service[ntp]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Pacemaker/File[/etc/systemd/system/resource-agents-deps.target.wants]/ensure: created", > "Notice: /Stage[main]/Timezone/Exec[update_timezone]/returns: executed successfully", > "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[iptables]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/File[/etc/pki/ca-trust/source/anchors/undercloud-ca.pem]/ensure: defined content as '{md5}8cd5ea7a71047b590f89d618413c6eb5'", > "Notice: /Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/Exec[trust-ca-undercloud-ca]: 
Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/File[/etc/sysconfig/modules/nf_conntrack.modules]/ensure: defined content as '{md5}69dc79067bb7ee8d7a8a12176ceddb02'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/File[/etc/sysconfig/modules/nf_conntrack_proto_sctp.modules]/ensure: defined content as '{md5}7dfc614157ed326e9943593a7aca37c9'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl[fs.inotify.max_user_instances]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl_runtime[fs.inotify.max_user_instances]/val: val changed '128' to '1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.suid_dumpable]/Sysctl[fs.suid_dumpable]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl[kernel.dmesg_restrict]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl_runtime[kernel.dmesg_restrict]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl[kernel.pid_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl_runtime[kernel.pid_max]/val: val changed '32768' to '1048576'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl[net.core.netdev_max_backlog]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl_runtime[net.core.netdev_max_backlog]/val: val changed '1000' to '10000'", 
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl[net.ipv4.conf.all.arp_accept]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl_runtime[net.ipv4.conf.all.arp_accept]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl[net.ipv4.conf.all.log_martians]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl_runtime[net.ipv4.conf.all.log_martians]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl[net.ipv4.conf.all.secure_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl_runtime[net.ipv4.conf.all.secure_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl[net.ipv4.conf.all.send_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl_runtime[net.ipv4.conf.all.send_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl[net.ipv4.conf.default.accept_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl_runtime[net.ipv4.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl[net.ipv4.conf.default.log_martians]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl_runtime[net.ipv4.conf.default.log_martians]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl[net.ipv4.conf.default.secure_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl_runtime[net.ipv4.conf.default.secure_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl[net.ipv4.conf.default.send_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl_runtime[net.ipv4.conf.default.send_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.ip_nonlocal_bind]/Sysctl[net.ipv4.ip_nonlocal_bind]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl[net.ipv4.neigh.default.gc_thresh1]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh1]/val: val changed '128' to '1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl[net.ipv4.neigh.default.gc_thresh2]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh2]/val: val changed '512' to '2048'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl[net.ipv4.neigh.default.gc_thresh3]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh3]/val: val changed '1024' to '4096'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl[net.ipv4.tcp_keepalive_intvl]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl_runtime[net.ipv4.tcp_keepalive_intvl]/val: val changed '75' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl[net.ipv4.tcp_keepalive_probes]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl_runtime[net.ipv4.tcp_keepalive_probes]/val: val changed '9' to '5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl[net.ipv4.tcp_keepalive_time]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl_runtime[net.ipv4.tcp_keepalive_time]/val: val changed '7200' to '5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl[net.ipv6.conf.all.accept_ra]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl_runtime[net.ipv6.conf.all.accept_ra]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl[net.ipv6.conf.all.accept_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl_runtime[net.ipv6.conf.all.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl[net.ipv6.conf.all.autoconf]/ensure: created", > 
"Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl_runtime[net.ipv6.conf.all.autoconf]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.disable_ipv6]/Sysctl[net.ipv6.conf.all.disable_ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl[net.ipv6.conf.default.accept_ra]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl_runtime[net.ipv6.conf.default.accept_ra]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl[net.ipv6.conf.default.accept_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl_runtime[net.ipv6.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl[net.ipv6.conf.default.autoconf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl_runtime[net.ipv6.conf.default.autoconf]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.disable_ipv6]/Sysctl[net.ipv6.conf.default.disable_ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.ip_nonlocal_bind]/Sysctl[net.ipv6.ip_nonlocal_bind]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl[net.netfilter.nf_conntrack_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl_runtime[net.netfilter.nf_conntrack_max]/val: val 
changed '262144' to '500000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl[net.nf_conntrack_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl_runtime[net.nf_conntrack_max]/val: val changed '262144' to '500000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]/ensure: created", > "Notice: /Stage[main]/Pacemaker::Service/Service[pcsd]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Pacemaker::Corosync/User[hacluster]/password: changed password", > "Notice: /Stage[main]/Pacemaker::Corosync/User[hacluster]/groups: groups changed '' to ['haclient']", > "Notice: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]: Triggered 'refresh' from 2 events", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]/ensure: created", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker-authkey]/ensure: defined content as '{md5}a839b1ab3552f629efbcc7aaf42e7964'", > "Notice: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]/returns: executed successfully", > "Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]/returns: executed successfully", > "Notice: /Stage[main]/Pacemaker::Service/Service[corosync]/enable: enable changed 'false' to 'true'", > "Notice: /Stage[main]/Pacemaker::Service/Service[pacemaker]/enable: enable changed 'false' to 'true'", > "Notice: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/returns: executed successfully", > "Notice: /Stage[main]/Systemd::Systemctl::Daemon_reload/Exec[systemctl-daemon-reload]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Pacemaker::Stonith/Pacemaker::Property[Disable STONITH]/Pcmk_property[property--stonith-enabled]/ensure: created", > "Notice: 
/Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/content: content changed '{md5}e9fa538db4f9b8222a5de59841d0dcf7' to '{md5}3534841fdb8db5b58d66600a60bf3759'", > "Notice: /Stage[main]/Ssh::Server::Service/Service[sshd]: Triggered 'refresh' from 2 events", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv6]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv4]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[memcached]/Tripleo::Firewall::Rule[121 memcached]/Firewall[121 memcached ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv4]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv4]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv6]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[snmp]/Tripleo::Firewall::Rule[124 snmp]/Firewall[124 snmp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv6]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv6]/ensure: created", > "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/iptables]/seluser: seluser changed 'unconfined_u' to 'system_u'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/ip6tables]/seluser: seluser changed 'unconfined_u' to 'system_u'", > "Notice: Applied catalog in 125.35 seconds", > "Changes:", > " Total: 166", > "Events:", > " Success: 166", > "Resources:", > " Changed: 165", > " Out of sync: 165", > " Total: 216", > " Restarted: 5", > "Time:", > " Concat file: 0.00", > " Schedule: 0.00", > " Anchor: 0.00", > " Cron: 0.00", > " File line: 0.00", > " Package manifest: 0.00", > " Augeas: 0.02", > " User: 0.04", > " Sysctl: 0.13", > " File: 0.17", > " Sysctl runtime: 0.19", > " Package: 0.40", > " Pcmk property: 1.00", > " Exec: 104.40", > " Total: 125.38", > " Firewall: 13.63", > " Last run: 1529657732", > " Service: 2.49", > " Config retrieval: 2.91", > " Concat fragment: 0.00", > " Filebucket: 0.00", > "Version:", > " Config: 1529657604", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. 
There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 140]:" > ] >} >2018-06-22 04:55:33,571 p=11115 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 1.81 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Compute1]/ensure: created", > "Notice: /Stage[main]/Certmonger/Service[certmonger]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Time::Ntp/Service[chronyd]/ensure: ensure changed 'running' to 'stopped'", > "Notice: /Stage[main]/Ntp::Config/File[/etc/ntp.conf]/content: content changed '{md5}913c85f0fde85f83c2d6c030ecf259e9' to '{md5}c1d92fa159fef3afd721be5f86af886d'", > "Notice: /Stage[main]/Ntp::Service/Service[ntp]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Timezone/Exec[update_timezone]/returns: executed successfully", > "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[iptables]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]/ensure: ensure changed 'stopped' to 'running'", > "Notice: 
/Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/File[/etc/pki/ca-trust/source/anchors/undercloud-ca.pem]/ensure: defined content as '{md5}8cd5ea7a71047b590f89d618413c6eb5'", > "Notice: /Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/Exec[trust-ca-undercloud-ca]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/File[/etc/sysconfig/modules/nf_conntrack.modules]/ensure: defined content as '{md5}69dc79067bb7ee8d7a8a12176ceddb02'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/File[/etc/sysconfig/modules/nf_conntrack_proto_sctp.modules]/ensure: defined content as '{md5}7dfc614157ed326e9943593a7aca37c9'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl[fs.inotify.max_user_instances]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl_runtime[fs.inotify.max_user_instances]/val: val changed '128' to '1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.suid_dumpable]/Sysctl[fs.suid_dumpable]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl[kernel.dmesg_restrict]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl_runtime[kernel.dmesg_restrict]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl[kernel.pid_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl_runtime[kernel.pid_max]/val: val changed '32768' to '1048576'", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl[net.core.netdev_max_backlog]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl_runtime[net.core.netdev_max_backlog]/val: val changed '1000' to '10000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl[net.ipv4.conf.all.arp_accept]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl_runtime[net.ipv4.conf.all.arp_accept]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl[net.ipv4.conf.all.log_martians]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl_runtime[net.ipv4.conf.all.log_martians]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl[net.ipv4.conf.all.secure_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl_runtime[net.ipv4.conf.all.secure_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl[net.ipv4.conf.all.send_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl_runtime[net.ipv4.conf.all.send_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl[net.ipv4.conf.default.accept_redirects]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl_runtime[net.ipv4.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl[net.ipv4.conf.default.log_martians]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl_runtime[net.ipv4.conf.default.log_martians]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl[net.ipv4.conf.default.secure_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl_runtime[net.ipv4.conf.default.secure_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl[net.ipv4.conf.default.send_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl_runtime[net.ipv4.conf.default.send_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.ip_nonlocal_bind]/Sysctl[net.ipv4.ip_nonlocal_bind]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl[net.ipv4.neigh.default.gc_thresh1]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh1]/val: val changed '128' to '1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl[net.ipv4.neigh.default.gc_thresh2]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh2]/val: val changed '512' to '2048'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl[net.ipv4.neigh.default.gc_thresh3]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh3]/val: val changed '1024' to '4096'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl[net.ipv4.tcp_keepalive_intvl]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl_runtime[net.ipv4.tcp_keepalive_intvl]/val: val changed '75' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl[net.ipv4.tcp_keepalive_probes]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl_runtime[net.ipv4.tcp_keepalive_probes]/val: val changed '9' to '5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl[net.ipv4.tcp_keepalive_time]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl_runtime[net.ipv4.tcp_keepalive_time]/val: val changed '7200' to '5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl[net.ipv6.conf.all.accept_ra]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl_runtime[net.ipv6.conf.all.accept_ra]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl[net.ipv6.conf.all.accept_redirects]/ensure: 
created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl_runtime[net.ipv6.conf.all.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl[net.ipv6.conf.all.autoconf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl_runtime[net.ipv6.conf.all.autoconf]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.disable_ipv6]/Sysctl[net.ipv6.conf.all.disable_ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl[net.ipv6.conf.default.accept_ra]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl_runtime[net.ipv6.conf.default.accept_ra]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl[net.ipv6.conf.default.accept_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl_runtime[net.ipv6.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl[net.ipv6.conf.default.autoconf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl_runtime[net.ipv6.conf.default.autoconf]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.disable_ipv6]/Sysctl[net.ipv6.conf.default.disable_ipv6]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.ip_nonlocal_bind]/Sysctl[net.ipv6.ip_nonlocal_bind]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl[net.netfilter.nf_conntrack_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl_runtime[net.netfilter.nf_conntrack_max]/val: val changed '262144' to '500000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl[net.nf_conntrack_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl_runtime[net.nf_conntrack_max]/val: val changed '262144' to '500000'", > "Notice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/content: content changed '{md5}e9fa538db4f9b8222a5de59841d0dcf7' to '{md5}3534841fdb8db5b58d66600a60bf3759'", > "Notice: /Stage[main]/Ssh::Server::Service/Service[sshd]: Triggered 'refresh' from 2 events", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to 
lo interface]/Firewall[002 accept all to lo interface ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_libvirt]/Tripleo::Firewall::Rule[200 nova_libvirt]/Firewall[200 nova_libvirt 
ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_libvirt]/Tripleo::Firewall::Rule[200 nova_libvirt]/Firewall[200 nova_libvirt ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_migration_target]/Tripleo::Firewall::Rule[113 nova_migration_target]/Firewall[113 nova_migration_target ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_migration_target]/Tripleo::Firewall::Rule[113 nova_migration_target]/Firewall[113 nova_migration_target ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[snmp]/Tripleo::Firewall::Rule[124 snmp]/Firewall[124 snmp ipv4]/ensure: created", > "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/iptables]/seluser: seluser changed 'unconfined_u' to 'system_u'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/ip6tables]/seluser: seluser changed 'unconfined_u' to 'system_u'", > "Notice: Applied catalog in 7.60 seconds", > "Changes:", > " Total: 98", > "Events:", > " Success: 98", > "Resources:", > " Total: 141", > " Restarted: 3", > " Out of sync: 98", > " Changed: 98", > "Time:", > " Filebucket: 0.00", > " Concat fragment: 0.00", > " Concat file: 0.00", > " Cron: 0.00", > " Schedule: 0.00", > " Anchor: 0.00", > " Package manifest: 0.00", > " Augeas: 0.02", > " Sysctl: 0.06", > " File: 0.14", > " Sysctl runtime: 0.19", > " Package: 0.25", > " Service: 1.26", > " Exec: 1.97", > " Last run: 1529657614", > " Config retrieval: 2.10", > " Firewall: 2.43", > " Total: 8.42", > "Version:", > " Config: 1529657605", > 
" Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 140]:" > ] >} >2018-06-22 04:55:33,585 p=11115 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for ceph-0.localdomain in environment production in 1.86 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_CephStorage1]/ensure: created", > "Notice: /Stage[main]/Certmonger/Service[certmonger]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Time::Ntp/Service[chronyd]/ensure: ensure changed 'running' to 'stopped'", > "Notice: /Stage[main]/Ntp::Config/File[/etc/ntp.conf]/content: content changed '{md5}913c85f0fde85f83c2d6c030ecf259e9' to '{md5}c1d92fa159fef3afd721be5f86af886d'", > "Notice: /Stage[main]/Ntp::Service/Service[ntp]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Timezone/Exec[update_timezone]/returns: executed successfully", > "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[iptables]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]/ensure: ensure changed 'stopped' to 'running'", > "Notice: 
/Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/File[/etc/pki/ca-trust/source/anchors/undercloud-ca.pem]/ensure: defined content as '{md5}8cd5ea7a71047b590f89d618413c6eb5'", > "Notice: /Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/Exec[trust-ca-undercloud-ca]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/File[/etc/sysconfig/modules/nf_conntrack.modules]/ensure: defined content as '{md5}69dc79067bb7ee8d7a8a12176ceddb02'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/File[/etc/sysconfig/modules/nf_conntrack_proto_sctp.modules]/ensure: defined content as '{md5}7dfc614157ed326e9943593a7aca37c9'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl[fs.inotify.max_user_instances]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl_runtime[fs.inotify.max_user_instances]/val: val changed '128' to '1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.suid_dumpable]/Sysctl[fs.suid_dumpable]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl[kernel.dmesg_restrict]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl_runtime[kernel.dmesg_restrict]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl[kernel.pid_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl_runtime[kernel.pid_max]/val: val changed '32768' to '1048576'", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl[net.core.netdev_max_backlog]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl_runtime[net.core.netdev_max_backlog]/val: val changed '1000' to '10000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl[net.ipv4.conf.all.arp_accept]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl_runtime[net.ipv4.conf.all.arp_accept]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl[net.ipv4.conf.all.log_martians]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl_runtime[net.ipv4.conf.all.log_martians]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl[net.ipv4.conf.all.secure_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl_runtime[net.ipv4.conf.all.secure_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl[net.ipv4.conf.all.send_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl_runtime[net.ipv4.conf.all.send_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl[net.ipv4.conf.default.accept_redirects]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl_runtime[net.ipv4.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl[net.ipv4.conf.default.log_martians]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl_runtime[net.ipv4.conf.default.log_martians]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl[net.ipv4.conf.default.secure_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl_runtime[net.ipv4.conf.default.secure_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl[net.ipv4.conf.default.send_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl_runtime[net.ipv4.conf.default.send_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.ip_nonlocal_bind]/Sysctl[net.ipv4.ip_nonlocal_bind]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl[net.ipv4.neigh.default.gc_thresh1]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh1]/val: val changed '128' to '1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl[net.ipv4.neigh.default.gc_thresh2]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh2]/val: val changed '512' to '2048'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl[net.ipv4.neigh.default.gc_thresh3]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh3]/val: val changed '1024' to '4096'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl[net.ipv4.tcp_keepalive_intvl]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl_runtime[net.ipv4.tcp_keepalive_intvl]/val: val changed '75' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl[net.ipv4.tcp_keepalive_probes]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl_runtime[net.ipv4.tcp_keepalive_probes]/val: val changed '9' to '5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl[net.ipv4.tcp_keepalive_time]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl_runtime[net.ipv4.tcp_keepalive_time]/val: val changed '7200' to '5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl[net.ipv6.conf.all.accept_ra]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl_runtime[net.ipv6.conf.all.accept_ra]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl[net.ipv6.conf.all.accept_redirects]/ensure: 
created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl_runtime[net.ipv6.conf.all.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl[net.ipv6.conf.all.autoconf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl_runtime[net.ipv6.conf.all.autoconf]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.disable_ipv6]/Sysctl[net.ipv6.conf.all.disable_ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl[net.ipv6.conf.default.accept_ra]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl_runtime[net.ipv6.conf.default.accept_ra]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl[net.ipv6.conf.default.accept_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl_runtime[net.ipv6.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl[net.ipv6.conf.default.autoconf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl_runtime[net.ipv6.conf.default.autoconf]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.disable_ipv6]/Sysctl[net.ipv6.conf.default.disable_ipv6]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.ip_nonlocal_bind]/Sysctl[net.ipv6.ip_nonlocal_bind]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl[net.netfilter.nf_conntrack_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl_runtime[net.netfilter.nf_conntrack_max]/val: val changed '65536' to '500000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl[net.nf_conntrack_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl_runtime[net.nf_conntrack_max]/val: val changed '65536' to '500000'", > "Notice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/content: content changed '{md5}e9fa538db4f9b8222a5de59841d0dcf7' to '{md5}3534841fdb8db5b58d66600a60bf3759'", > "Notice: /Stage[main]/Ssh::Server::Service/Service[sshd]: Triggered 'refresh' from 2 events", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo 
interface]/Firewall[002 accept all to lo interface ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_osd]/Tripleo::Firewall::Rule[111 ceph_osd]/Firewall[111 ceph_osd ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_osd]/Tripleo::Firewall::Rule[111 ceph_osd]/Firewall[111 ceph_osd ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[snmp]/Tripleo::Firewall::Rule[124 snmp]/Firewall[124 snmp ipv4]/ensure: created", > "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/iptables]/seluser: seluser changed 'unconfined_u' to 'system_u'", > "Notice: 
/Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/ip6tables]/seluser: seluser changed 'unconfined_u' to 'system_u'", > "Notice: Applied catalog in 6.96 seconds", > "Changes:", > " Total: 92", > "Events:", > " Success: 92", > "Resources:", > " Total: 135", > " Restarted: 3", > " Out of sync: 92", > " Changed: 92", > "Time:", > " Filebucket: 0.00", > " Concat file: 0.00", > " Anchor: 0.00", > " Cron: 0.00", > " Schedule: 0.00", > " Package manifest: 0.00", > " Augeas: 0.02", > " Sysctl: 0.12", > " File: 0.14", > " Sysctl runtime: 0.18", > " Package: 0.25", > " Service: 1.41", > " Firewall: 1.63", > " Exec: 1.98", > " Last run: 1529657613", > " Config retrieval: 2.14", > " Total: 7.87", > " Concat fragment: 0.00", > "Version:", > " Config: 1529657604", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 140]:" > ] >} >2018-06-22 04:55:33,610 p=11115 u=mistral | TASK [Run docker-puppet tasks (generate config) during step 1] ***************** >2018-06-22 04:55:54,512 p=11115 u=mistral | ok: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 04:56:24,570 p=11115 u=mistral | ok: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 04:58:07,516 p=11115 u=mistral | ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 04:58:07,538 p=11115 u=mistral | TASK [Debug output for task which failed: Run docker-puppet tasks (generate config) during step 1] *** >2018-06-22 04:58:07,655 p=11115 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "2018-06-22 08:55:34,100 INFO: 20433 -- Running docker-puppet", > "2018-06-22 08:55:34,101 DEBUG: 20433 -- CONFIG: /var/lib/docker-puppet/docker-puppet.json", > "2018-06-22 08:55:34,101 DEBUG: 
20433 -- config_volume crond", > "2018-06-22 08:55:34,101 DEBUG: 20433 -- puppet_tags ", > "2018-06-22 08:55:34,101 DEBUG: 20433 -- manifest include ::tripleo::profile::base::logging::logrotate", > "2018-06-22 08:55:34,101 DEBUG: 20433 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-22 08:55:34,101 DEBUG: 20433 -- volumes []", > "2018-06-22 08:55:34,101 DEBUG: 20433 -- Adding new service", > "2018-06-22 08:55:34,101 INFO: 20433 -- Service compilation completed.", > "2018-06-22 08:55:34,102 DEBUG: 20433 -- - [u'crond', 'file,file_line,concat,augeas,cron', u'include ::tripleo::profile::base::logging::logrotate', u'192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4', []]", > "2018-06-22 08:55:34,102 INFO: 20433 -- Starting multiprocess configuration steps. Using 3 processes.", > "2018-06-22 08:55:34,113 INFO: 20434 -- Starting configuration of crond using image 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-22 08:55:34,113 DEBUG: 20434 -- config_volume crond", > "2018-06-22 08:55:34,114 DEBUG: 20434 -- puppet_tags file,file_line,concat,augeas,cron", > "2018-06-22 08:55:34,114 DEBUG: 20434 -- manifest include ::tripleo::profile::base::logging::logrotate", > "2018-06-22 08:55:34,114 DEBUG: 20434 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-22 08:55:34,114 DEBUG: 20434 -- volumes []", > "2018-06-22 08:55:34,115 INFO: 20434 -- Removing container: docker-puppet-crond", > "2018-06-22 08:55:34,202 INFO: 20434 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-22 08:55:46,982 DEBUG: 20434 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cron ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-cron", > "e0f71f706c2a: Pulling fs layer", > "121ab4741000: Pulling fs layer", > "a8ff0031dfcb: Pulling fs layer", > "a94d9ea04263: Pulling fs layer", > "a94d9ea04263: Waiting", > "121ab4741000: Verifying Checksum", > "121ab4741000: Download complete", > "a94d9ea04263: Verifying Checksum", > "a94d9ea04263: Download complete", > "a8ff0031dfcb: Verifying Checksum", > "a8ff0031dfcb: Download complete", > "e0f71f706c2a: Verifying Checksum", > "e0f71f706c2a: Download complete", > "e0f71f706c2a: Pull complete", > "121ab4741000: Pull complete", > "a8ff0031dfcb: Pull complete", > "a94d9ea04263: Pull complete", > "Digest: sha256:cbc58f1f133447db6c3e634ca05251825f6a2ede8528959b5cd6e0cb1c3de3ba", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "", > "2018-06-22 08:55:46,985 DEBUG: 20434 -- NET_HOST enabled", > "2018-06-22 08:55:46,986 DEBUG: 20434 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-crond --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=crond --env HOSTNAME=ceph-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmp74cnyI:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 
192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-22 08:55:54,346 DEBUG: 20434 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for ceph-0.localdomain in environment production in 0.58 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/File[/etc/logrotate-crond.conf]/ensure: defined content as '{md5}13ae5d5b43716a32da6855edd3f15758'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/Cron[logrotate-crond]/ensure: created", > "Notice: Applied catalog in 0.04 seconds", > "Changes:", > " Total: 2", > "Events:", > " Success: 2", > "Resources:", > " Changed: 2", > " Out of sync: 2", > " Skipped: 7", > " Total: 9", > "Time:", > " File: 0.00", > " Cron: 0.01", > " Config retrieval: 0.65", > " Total: 0.66", > " Last run: 1529657753", > "Version:", > " Config: 1529657752", > " Puppet: 4.8.2", > "Gathering files modified after 2018-06-22 08:55:47.253972674 +0000", > "2018-06-22 08:55:54,346 DEBUG: 20434 -- + mkdir -p /etc/puppet", > "+ cp -a /tmp/puppet-etc/auth.conf /tmp/puppet-etc/hiera.yaml /tmp/puppet-etc/hieradata /tmp/puppet-etc/modules /tmp/puppet-etc/puppet.conf /tmp/puppet-etc/ssl /etc/puppet", > "+ rm -Rf /etc/puppet/ssl", > "+ echo '{\"step\": 6}'", > "+ TAGS=", > "+ '[' -n file,file_line,concat,augeas,cron ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron'", > "+ origin_of_time=/var/lib/config-data/crond.origin_of_time", > "+ touch /var/lib/config-data/crond.origin_of_time", > "+ sync", > "+ set +e", > "+ FACTER_hostname=ceph-0", > "+ FACTER_uuid=docker", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron /etc/config.pp", > 
"Failed to get D-Bus connection: Operation not permitted", > "Warning: Facter: Could not retrieve fact='nic_alias', resolution='<anonymous>': Could not execute '/usr/bin/os-net-config -i': command not found", > "Warning: Undefined variable 'deploy_config_name'; ", > " (file & line not available)", > "+ rc=2", > "+ set -e", > "+ '[' 2 -ne 2 -a 2 -ne 0 ']'", > "+ '[' -z '' ']'", > "+ archivedirs=(\"/etc\" \"/root\" \"/opt\" \"/var/lib/ironic/tftpboot\" \"/var/lib/ironic/httpboot\" \"/var/www\" \"/var/spool/cron\" \"/var/lib/nova/.ssh\")", > "+ rsync_srcs=", > "+ for d in '\"${archivedirs[@]}\"'", > "+ '[' -d /etc ']'", > "+ rsync_srcs+=' /etc'", > "+ '[' -d /root ']'", > "+ rsync_srcs+=' /root'", > "+ '[' -d /opt ']'", > "+ rsync_srcs+=' /opt'", > "+ '[' -d /var/lib/ironic/tftpboot ']'", > "+ '[' -d /var/lib/ironic/httpboot ']'", > "+ '[' -d /var/www ']'", > "+ '[' -d /var/spool/cron ']'", > "+ rsync_srcs+=' /var/spool/cron'", > "+ '[' -d /var/lib/nova/.ssh ']'", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/crond", > "++ stat -c %y /var/lib/config-data/crond.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 08:55:47.253972674 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/crond", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/crond", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/crond.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/crond --mtime=1970-01-01", > "+ md5sum", > "tar: Removing leading `/' from member names", > "+ awk '{print $1}'", > "+ tar -c -f - /var/lib/config-data/puppet-generated/crond --mtime=1970-01-01", > "2018-06-22 08:55:54,346 INFO: 20434 -- Removing container: docker-puppet-crond", > "2018-06-22 08:55:54,394 DEBUG: 20434 -- docker-puppet-crond", > "2018-06-22 08:55:54,394 INFO: 20434 -- Finished processing puppet configs 
for crond", > "2018-06-22 08:55:54,396 DEBUG: 20433 -- CONFIG_VOLUME_PREFIX: /var/lib/config-data", > "2018-06-22 08:55:54,397 DEBUG: 20433 -- STARTUP_CONFIG_PATTERN: /var/lib/tripleo-config/docker-container-startup-config-step_*.json", > "2018-06-22 08:55:54,400 DEBUG: 20433 -- Looking for hashfile /var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond", > "2018-06-22 08:55:54,401 DEBUG: 20433 -- Got hashfile /var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond", > "2018-06-22 08:55:54,401 DEBUG: 20433 -- Updating config hash for logrotate_crond, config_volume=crond hash=bb58377065843a54ef976ad9569f4b07" > ] >} >2018-06-22 04:58:07,996 p=11115 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "2018-06-22 08:55:34,097 INFO: 24510 -- Running docker-puppet", > "2018-06-22 08:55:34,097 DEBUG: 24510 -- CONFIG: /var/lib/docker-puppet/docker-puppet.json", > "2018-06-22 08:55:34,098 DEBUG: 24510 -- config_volume ceilometer", > "2018-06-22 08:55:34,098 DEBUG: 24510 -- puppet_tags ceilometer_config", > "2018-06-22 08:55:34,098 DEBUG: 24510 -- manifest include ::tripleo::profile::base::ceilometer::agent::polling", > "", > "2018-06-22 08:55:34,098 DEBUG: 24510 -- config_image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", > "2018-06-22 08:55:34,098 DEBUG: 24510 -- volumes []", > "2018-06-22 08:55:34,098 DEBUG: 24510 -- Adding new service", > "2018-06-22 08:55:34,098 DEBUG: 24510 -- config_volume neutron", > "2018-06-22 08:55:34,098 DEBUG: 24510 -- puppet_tags neutron_plugin_ml2", > "2018-06-22 08:55:34,098 DEBUG: 24510 -- manifest include ::tripleo::profile::base::neutron::plugins::ml2", > "2018-06-22 08:55:34,098 DEBUG: 24510 -- config_image 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-22 08:55:34,099 
DEBUG: 24510 -- Adding new service", > "2018-06-22 08:55:34,099 DEBUG: 24510 -- config_volume neutron", > "2018-06-22 08:55:34,099 DEBUG: 24510 -- puppet_tags neutron_config,neutron_agent_ovs,neutron_plugin_ml2", > "2018-06-22 08:55:34,099 DEBUG: 24510 -- manifest include ::tripleo::profile::base::neutron::ovs", > "2018-06-22 08:55:34,099 DEBUG: 24510 -- config_image 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-22 08:55:34,099 DEBUG: 24510 -- volumes [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch']", > "2018-06-22 08:55:34,099 DEBUG: 24510 -- Existing service, appending puppet tags and manifest", > "2018-06-22 08:55:34,099 DEBUG: 24510 -- config_volume iscsid", > "2018-06-22 08:55:34,099 DEBUG: 24510 -- puppet_tags iscsid_config", > "2018-06-22 08:55:34,099 DEBUG: 24510 -- manifest include ::tripleo::profile::base::iscsid", > "2018-06-22 08:55:34,099 DEBUG: 24510 -- config_image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", > "2018-06-22 08:55:34,099 DEBUG: 24510 -- volumes [u'/etc/iscsi:/etc/iscsi']", > "2018-06-22 08:55:34,099 DEBUG: 24510 -- config_volume nova_libvirt", > "2018-06-22 08:55:34,099 DEBUG: 24510 -- puppet_tags nova_config,nova_paste_api_ini", > "2018-06-22 08:55:34,099 DEBUG: 24510 -- manifest # TODO(emilien): figure how to deal with libvirt profile.", > "# We'll probably treat it like we do with Neutron plugins.", > "# Until then, just include it in the default nova-compute role.", > "include tripleo::profile::base::nova::compute::libvirt", > "include ::tripleo::profile::base::database::mysql::client", > "2018-06-22 08:55:34,099 DEBUG: 24510 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4", > "2018-06-22 08:55:34,099 DEBUG: 24510 -- volumes []", > "2018-06-22 08:55:34,100 DEBUG: 24510 -- puppet_tags libvirtd_config,nova_config,file,libvirt_tls_password", > "2018-06-22 08:55:34,100 DEBUG: 24510 -- manifest include tripleo::profile::base::nova::libvirt", 
> "2018-06-22 08:55:34,100 DEBUG: 24510 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4", > "2018-06-22 08:55:34,100 DEBUG: 24510 -- volumes []", > "2018-06-22 08:55:34,100 DEBUG: 24510 -- Existing service, appending puppet tags and manifest", > "2018-06-22 08:55:34,100 DEBUG: 24510 -- config_volume nova_libvirt", > "2018-06-22 08:55:34,100 DEBUG: 24510 -- puppet_tags ", > "2018-06-22 08:55:34,100 DEBUG: 24510 -- manifest include ::tripleo::profile::base::sshd", > "include tripleo::profile::base::nova::migration::target", > "2018-06-22 08:55:34,100 DEBUG: 24510 -- config_volume crond", > "2018-06-22 08:55:34,100 DEBUG: 24510 -- manifest include ::tripleo::profile::base::logging::logrotate", > "2018-06-22 08:55:34,100 DEBUG: 24510 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-22 08:55:34,100 DEBUG: 24510 -- Adding new service", > "2018-06-22 08:55:34,100 INFO: 24510 -- Service compilation completed.", > "2018-06-22 08:55:34,101 DEBUG: 24510 -- - [u'ceilometer', u'file,file_line,concat,augeas,cron,ceilometer_config', u'include ::tripleo::profile::base::ceilometer::agent::polling\\n', u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4', []]", > "2018-06-22 08:55:34,101 DEBUG: 24510 -- - [u'nova_libvirt', u'file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password', u\"# TODO(emilien): figure how to deal with libvirt profile.\\n# We'll probably treat it like we do with Neutron plugins.\\n# Until then, just include it in the default nova-compute role.\\ninclude tripleo::profile::base::nova::compute::libvirt\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::nova::libvirt\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::sshd\\ninclude tripleo::profile::base::nova::migration::target\", 
u'192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4', []]", > "2018-06-22 08:55:34,101 DEBUG: 24510 -- - [u'crond', 'file,file_line,concat,augeas,cron', u'include ::tripleo::profile::base::logging::logrotate', u'192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4', []]", > "2018-06-22 08:55:34,101 DEBUG: 24510 -- - [u'neutron', u'file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2', u'include ::tripleo::profile::base::neutron::plugins::ml2\\n\\ninclude ::tripleo::profile::base::neutron::ovs\\n', u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch']]", > "2018-06-22 08:55:34,101 DEBUG: 24510 -- - [u'iscsid', u'file,file_line,concat,augeas,cron,iscsid_config', u'include ::tripleo::profile::base::iscsid', u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4', [u'/etc/iscsi:/etc/iscsi']]", > "2018-06-22 08:55:34,101 INFO: 24510 -- Starting multiprocess configuration steps. 
Using 3 processes.", > "2018-06-22 08:55:34,114 INFO: 24511 -- Starting configuration of ceilometer using image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", > "2018-06-22 08:55:34,114 INFO: 24512 -- Starting configuration of nova_libvirt using image 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4", > "2018-06-22 08:55:34,114 DEBUG: 24511 -- config_volume ceilometer", > "2018-06-22 08:55:34,114 DEBUG: 24512 -- config_volume nova_libvirt", > "2018-06-22 08:55:34,114 DEBUG: 24511 -- puppet_tags file,file_line,concat,augeas,cron,ceilometer_config", > "2018-06-22 08:55:34,115 DEBUG: 24512 -- puppet_tags file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password", > "2018-06-22 08:55:34,115 DEBUG: 24511 -- manifest include ::tripleo::profile::base::ceilometer::agent::polling", > "2018-06-22 08:55:34,115 DEBUG: 24512 -- manifest # TODO(emilien): figure how to deal with libvirt profile.", > "include tripleo::profile::base::nova::libvirt", > "include ::tripleo::profile::base::sshd", > "2018-06-22 08:55:34,115 DEBUG: 24511 -- config_image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", > "2018-06-22 08:55:34,115 DEBUG: 24512 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4", > "2018-06-22 08:55:34,115 DEBUG: 24511 -- volumes []", > "2018-06-22 08:55:34,115 DEBUG: 24512 -- volumes []", > "2018-06-22 08:55:34,116 INFO: 24513 -- Starting configuration of crond using image 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-22 08:55:34,116 INFO: 24512 -- Removing container: docker-puppet-nova_libvirt", > "2018-06-22 08:55:34,116 DEBUG: 24513 -- config_volume crond", > "2018-06-22 08:55:34,116 DEBUG: 24513 -- puppet_tags file,file_line,concat,augeas,cron", > "2018-06-22 08:55:34,116 DEBUG: 24513 -- manifest include ::tripleo::profile::base::logging::logrotate", > "2018-06-22 08:55:34,117 DEBUG: 24513 -- config_image 
192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-22 08:55:34,117 DEBUG: 24513 -- volumes []", > "2018-06-22 08:55:34,117 INFO: 24511 -- Removing container: docker-puppet-ceilometer", > "2018-06-22 08:55:34,117 INFO: 24513 -- Removing container: docker-puppet-crond", > "2018-06-22 08:55:34,203 INFO: 24513 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-22 08:55:34,203 INFO: 24512 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4", > "2018-06-22 08:55:34,206 INFO: 24511 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", > "2018-06-22 08:55:46,891 DEBUG: 24513 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cron ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-cron", > "e0f71f706c2a: Pulling fs layer", > "121ab4741000: Pulling fs layer", > "a8ff0031dfcb: Pulling fs layer", > "a94d9ea04263: Pulling fs layer", > "a94d9ea04263: Waiting", > "121ab4741000: Download complete", > "a8ff0031dfcb: Verifying Checksum", > "a8ff0031dfcb: Download complete", > "e0f71f706c2a: Verifying Checksum", > "e0f71f706c2a: Download complete", > "a94d9ea04263: Verifying Checksum", > "a94d9ea04263: Download complete", > "e0f71f706c2a: Pull complete", > "121ab4741000: Pull complete", > "a8ff0031dfcb: Pull complete", > "a94d9ea04263: Pull complete", > "Digest: sha256:cbc58f1f133447db6c3e634ca05251825f6a2ede8528959b5cd6e0cb1c3de3ba", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-22 08:55:46,895 DEBUG: 24513 -- NET_HOST enabled", > "2018-06-22 08:55:46,895 DEBUG: 24513 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-crond --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=crond --env HOSTNAME=compute-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpGB4QfE:/etc/config.pp:ro,z --volume 
/etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-22 08:55:53,700 DEBUG: 24511 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-ceilometer-central ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-ceilometer-central", > "c66228eb2ac7: Pulling fs layer", > "333aa6b2b383: Pulling fs layer", > "1eb9ef5adcb4: Pulling fs layer", > "c66228eb2ac7: Waiting", > "333aa6b2b383: Waiting", > "1eb9ef5adcb4: Waiting", > "c66228eb2ac7: Verifying Checksum", > "c66228eb2ac7: Download complete", > "333aa6b2b383: Verifying Checksum", > "333aa6b2b383: Download complete", > "1eb9ef5adcb4: Verifying Checksum", > "1eb9ef5adcb4: Download complete", > "c66228eb2ac7: Pull complete", > "333aa6b2b383: Pull complete", > "1eb9ef5adcb4: Pull complete", > "Digest: sha256:3f638e03aaf1d7e303183e06ff1627a5a0efeaef228a7be1e9667ae62d7d6a1b", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", > "2018-06-22 08:55:53,704 DEBUG: 24511 -- NET_HOST enabled", > "2018-06-22 08:55:53,704 DEBUG: 24511 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-ceilometer --env PUPPET_TAGS=file,file_line,concat,augeas,cron,ceilometer_config --env NAME=ceilometer --env 
HOSTNAME=compute-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmps1Tp5t:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", > "2018-06-22 08:55:55,380 DEBUG: 24513 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 0.54 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/File[/etc/logrotate-crond.conf]/ensure: defined content as '{md5}13ae5d5b43716a32da6855edd3f15758'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/Cron[logrotate-crond]/ensure: created", > "Notice: Applied catalog in 0.44 seconds", > "Changes:", > " Total: 2", > "Events:", > " Success: 2", > "Resources:", > " Changed: 2", > " Out of sync: 2", > " Skipped: 7", > " Total: 9", > "Time:", > " Cron: 0.01", > " File: 0.28", > " Config retrieval: 0.66", > " Total: 0.94", > " Last run: 1529657754", > "Version:", > " Config: 1529657753", > " Puppet: 4.8.2", > "Gathering files modified after 2018-06-22 
08:55:47.263739161 +0000", > "2018-06-22 08:55:55,380 DEBUG: 24513 -- + mkdir -p /etc/puppet", > "+ cp -a /tmp/puppet-etc/auth.conf /tmp/puppet-etc/hiera.yaml /tmp/puppet-etc/hieradata /tmp/puppet-etc/modules /tmp/puppet-etc/puppet.conf /tmp/puppet-etc/ssl /etc/puppet", > "+ rm -Rf /etc/puppet/ssl", > "+ echo '{\"step\": 6}'", > "+ TAGS=", > "+ '[' -n file,file_line,concat,augeas,cron ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron'", > "+ origin_of_time=/var/lib/config-data/crond.origin_of_time", > "+ touch /var/lib/config-data/crond.origin_of_time", > "+ sync", > "+ set +e", > "+ FACTER_hostname=compute-0", > "+ FACTER_uuid=docker", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron /etc/config.pp", > "Failed to get D-Bus connection: Operation not permitted", > "Warning: Facter: Could not retrieve fact='nic_alias', resolution='<anonymous>': Could not execute '/usr/bin/os-net-config -i': command not found", > "Warning: Undefined variable 'deploy_config_name'; ", > " (file & line not available)", > "+ rc=2", > "+ set -e", > "+ '[' 2 -ne 2 -a 2 -ne 0 ']'", > "+ '[' -z '' ']'", > "+ archivedirs=(\"/etc\" \"/root\" \"/opt\" \"/var/lib/ironic/tftpboot\" \"/var/lib/ironic/httpboot\" \"/var/www\" \"/var/spool/cron\" \"/var/lib/nova/.ssh\")", > "+ rsync_srcs=", > "+ for d in '\"${archivedirs[@]}\"'", > "+ '[' -d /etc ']'", > "+ rsync_srcs+=' /etc'", > "+ '[' -d /root ']'", > "+ rsync_srcs+=' /root'", > "+ '[' -d /opt ']'", > "+ rsync_srcs+=' /opt'", > "+ '[' -d /var/lib/ironic/tftpboot ']'", > "+ '[' -d /var/lib/ironic/httpboot ']'", > "+ '[' -d /var/www ']'", > "+ '[' -d /var/spool/cron ']'", > "+ rsync_srcs+=' /var/spool/cron'", > "+ '[' -d /var/lib/nova/.ssh ']'", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/crond", > "++ stat -c %y 
/var/lib/config-data/crond.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 08:55:47.263739161 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/crond", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/crond", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/crond.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/crond --mtime=1970-01-01", > "+ awk '{print $1}'", > "+ md5sum", > "tar: Removing leading `/' from member names", > "+ tar -c -f - /var/lib/config-data/puppet-generated/crond --mtime=1970-01-01", > "2018-06-22 08:55:55,380 INFO: 24513 -- Removing container: docker-puppet-crond", > "2018-06-22 08:55:55,434 DEBUG: 24513 -- docker-puppet-crond", > "2018-06-22 08:55:55,434 INFO: 24513 -- Finished processing puppet configs for crond", > "2018-06-22 08:55:55,434 INFO: 24513 -- Starting configuration of neutron using image 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-22 08:55:55,434 DEBUG: 24513 -- config_volume neutron", > "2018-06-22 08:55:55,434 DEBUG: 24513 -- puppet_tags file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2", > "2018-06-22 08:55:55,434 DEBUG: 24513 -- manifest include ::tripleo::profile::base::neutron::plugins::ml2", > "include ::tripleo::profile::base::neutron::ovs", > "2018-06-22 08:55:55,435 DEBUG: 24513 -- config_image 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-22 08:55:55,435 DEBUG: 24513 -- volumes [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch']", > "2018-06-22 08:55:55,435 INFO: 24513 -- Removing container: docker-puppet-neutron", > "2018-06-22 08:55:55,534 INFO: 24513 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-22 08:56:00,379 DEBUG: 24513 -- Trying to pull repository 
192.168.24.1:8787/rhosp14/openstack-neutron-server ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-neutron-server", > "e0f71f706c2a: Already exists", > "121ab4741000: Already exists", > "a8ff0031dfcb: Already exists", > "c66228eb2ac7: Already exists", > "ea1d509b6f44: Pulling fs layer", > "e9f9993bb931: Pulling fs layer", > "e9f9993bb931: Verifying Checksum", > "e9f9993bb931: Download complete", > "ea1d509b6f44: Verifying Checksum", > "ea1d509b6f44: Download complete", > "ea1d509b6f44: Pull complete", > "e9f9993bb931: Pull complete", > "Digest: sha256:af12594500608f07f8d38590e2c9b2983e5d81ae8b63aec042f36411b0e76adc", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-22 08:56:00,382 DEBUG: 24513 -- NET_HOST enabled", > "2018-06-22 08:56:00,382 DEBUG: 24513 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-neutron --env PUPPET_TAGS=file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 --env NAME=neutron --env HOSTNAME=compute-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpv7blIn:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch --entrypoint /var/lib/docker-puppet/docker-puppet.sh 
--net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-22 08:56:02,380 DEBUG: 24511 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 1.14 seconds", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/http_timeout]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[publisher/telemetry_secret]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[database/event_time_to_live]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[database/metering_time_to_live]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_password]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Dispatcher::Gnocchi/Ceilometer_config[dispatcher_gnocchi/filter_project]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Dispatcher::Gnocchi/Ceilometer_config[dispatcher_gnocchi/archive_policy]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Dispatcher::Gnocchi/Ceilometer_config[dispatcher_gnocchi/resources_definition_file]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/auth_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/region_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/username]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/password]/ensure: created", > "Notice: 
/Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/project_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/auth_type]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/interface]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[DEFAULT/polling_namespaces]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[coordination/backend_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/topics]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Default[ceilometer_config]/Ceilometer_config[DEFAULT/transport_url]/ensure: created", > "Notice: 
Applied catalog in 0.85 seconds", > " Total: 29", > " Success: 29", > " Total: 141", > " Skipped: 22", > " Out of sync: 29", > " Changed: 29", > " Resources: 0.00", > " Ceilometer config: 0.72", > " Config retrieval: 1.33", > " Last run: 1529657761", > " Total: 2.06", > " Config: 1529657759", > "Gathering files modified after 2018-06-22 08:55:53.937698259 +0000", > "2018-06-22 08:56:02,380 DEBUG: 24511 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,ceilometer_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,ceilometer_config'", > "+ origin_of_time=/var/lib/config-data/ceilometer.origin_of_time", > "+ touch /var/lib/config-data/ceilometer.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,ceilometer_config /etc/config.pp", > "Warning: ModuleLoader: module 'ceilometer' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ceilometer/manifests/config.pp\", 35]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/ceilometer.pp\", 111]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > "Warning: ModuleLoader: module 'oslo' has unresolved dependencies - it will only see those that are resolved. 
Use 'puppet module list --tree' to see information about modules", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/ceilometer", > "++ stat -c %y /var/lib/config-data/ceilometer.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 08:55:53.937698259 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/ceilometer", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/ceilometer", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/ceilometer.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/ceilometer --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/ceilometer --mtime=1970-01-01", > "2018-06-22 08:56:02,380 INFO: 24511 -- Removing container: docker-puppet-ceilometer", > "2018-06-22 08:56:02,434 DEBUG: 24511 -- docker-puppet-ceilometer", > "2018-06-22 08:56:02,434 INFO: 24511 -- Finished processing puppet configs for ceilometer", > "2018-06-22 08:56:02,434 INFO: 24511 -- Starting configuration of iscsid using image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", > "2018-06-22 08:56:02,434 DEBUG: 24511 -- config_volume iscsid", > "2018-06-22 08:56:02,434 DEBUG: 24511 -- puppet_tags file,file_line,concat,augeas,cron,iscsid_config", > "2018-06-22 08:56:02,434 DEBUG: 24511 -- manifest include ::tripleo::profile::base::iscsid", > "2018-06-22 08:56:02,435 DEBUG: 24511 -- config_image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", > "2018-06-22 08:56:02,435 DEBUG: 24511 -- volumes [u'/etc/iscsi:/etc/iscsi']", > "2018-06-22 08:56:02,435 INFO: 24511 -- Removing container: docker-puppet-iscsid", > "2018-06-22 08:56:02,530 INFO: 24511 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", > "2018-06-22 08:56:03,255 DEBUG: 24511 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-iscsid ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-iscsid", > "ab4eae34093d: Pulling fs layer", > "ab4eae34093d: Verifying Checksum", > "ab4eae34093d: Download complete", > "ab4eae34093d: Pull complete", > "Digest: sha256:a46aa93fee87b0f173118da5c2a18dc271772adb839a481ec07f2a53534ac53c", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", > "2018-06-22 08:56:03,258 DEBUG: 24511 -- NET_HOST enabled", > "2018-06-22 08:56:03,259 DEBUG: 24511 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-iscsid --env PUPPET_TAGS=file,file_line,concat,augeas,cron,iscsid_config --env NAME=iscsid --env HOSTNAME=compute-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpOkpu4G:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --volume /etc/iscsi:/etc/iscsi --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", > "2018-06-22 08:56:07,816 DEBUG: 24512 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-nova-compute ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-nova-compute", > "0e3031608420: Pulling fs layer", > "9c13697fe587: Pulling fs layer", > "9c13697fe587: Waiting", > "0e3031608420: Waiting", > "0e3031608420: Verifying Checksum", > "0e3031608420: Download complete", > "9c13697fe587: Verifying Checksum", > "9c13697fe587: Download complete", > "0e3031608420: Pull complete", > "9c13697fe587: Pull complete", > "Digest: sha256:c6b75506ba5602b470f8dbfdcc57e0bcd20fc363d265aa234469343e439fa65a", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4", > "2018-06-22 08:56:07,819 DEBUG: 24512 -- NET_HOST enabled", > "2018-06-22 08:56:07,819 DEBUG: 24512 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-nova_libvirt --env PUPPET_TAGS=file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password --env NAME=nova_libvirt --env HOSTNAME=compute-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpTKFfYG:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4", > "2018-06-22 08:56:09,803 DEBUG: 24513 -- Notice: hiera(): Cannot load backend 
module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 2.21 seconds", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/auth_strategy]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/core_plugin]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dns_domain]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agents_per_network]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agent_notification]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/allow_overlapping_ips]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/global_physnet_mtu]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[agent/root_helper]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/service_plugins]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/File[/etc/neutron/plugin.ini]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/File[/etc/default/neutron-server]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/type_drivers]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/tenant_network_types]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/mechanism_drivers]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/path_mtu]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/extension_drivers]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/overlay_ip_version]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[securitygroup/firewall_driver]/ensure: 
created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/bridge_mappings]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/l2_population]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/arp_responder]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/enable_distributed_routing]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/drop_flows_on_start]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/extensions]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/integration_bridge]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[securitygroup/firewall_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/tunnel_bridge]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/local_ip]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/tunnel_types]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/vxlan_udp_port]/ensure: created", > "Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/control_exchange]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Concurrency[neutron_config]/Neutron_config[oslo_concurrency/lock_path]/ensure: created", > "Notice: 
/Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_password]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_userid]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_port]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vxlan]/Neutron_plugin_ml2[ml2_type_vxlan/vxlan_group]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vxlan]/Neutron_plugin_ml2[ml2_type_vxlan/vni_ranges]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vlan]/Neutron_plugin_ml2[ml2_type_vlan/network_vlan_ranges]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[flat]/Neutron_plugin_ml2[ml2_type_flat/flat_networks]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[gre]/Neutron_plugin_ml2[ml2_type_gre/tunnel_id_ranges]/ensure: created", > "Notice: Applied catalog in 0.81 seconds", > " Total: 48", > " Success: 48", > " Total: 174", > " Skipped: 27", > " Out of sync: 48", > " Changed: 48", > " File: 0.01", > " Neutron plugin ml2: 0.03", > " Neutron agent 
ovs: 0.06", > " Neutron config: 0.49", > " Last run: 1529657768", > " Config retrieval: 2.42", > " Total: 3.00", > " Config: 1529657765", > "Gathering files modified after 2018-06-22 08:56:00.602658478 +0000", > "2018-06-22 08:56:09,804 DEBUG: 24513 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2'", > "+ origin_of_time=/var/lib/config-data/neutron.origin_of_time", > "+ touch /var/lib/config-data/neutron.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 /etc/config.pp", > "Warning: ModuleLoader: module 'neutron' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: Scope(Class[Neutron]): neutron::rabbit_host, neutron::rabbit_hosts, neutron::rabbit_password, neutron::rabbit_port, neutron::rabbit_user, neutron::rabbit_virtual_host and neutron::rpc_backend are deprecated. Please use neutron::default_transport_url instead.", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Array instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/neutron/manifests/init.pp\", 530]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron/plugins/ml2.pp\", 45]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/neutron/manifests/config.pp\", 132]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron.pp\", 141]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/neutron/manifests/agents/ml2/ovs.pp\", 219]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron/ovs.pp\", 59]", > "+ rsync_srcs+=' /var/www'", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/neutron", > "++ stat -c %y /var/lib/config-data/neutron.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 08:56:00.602658478 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/neutron", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/neutron", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/neutron.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/neutron --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/neutron --mtime=1970-01-01", > "2018-06-22 08:56:09,804 INFO: 24513 -- Removing container: docker-puppet-neutron", > "2018-06-22 08:56:09,832 DEBUG: 24511 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 0.41 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[reset-iscsi-initiator-name]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/File[/etc/iscsi/.initiator_reset]/ensure: created", > "Notice: Applied catalog in 0.03 seconds", > " Total: 10", > " Skipped: 8", > " File: 0.00", > " Exec: 0.02", > " Config retrieval: 0.53", > " Total: 0.55", > " Last run: 1529657769", > " Config: 1529657768", > "Gathering files modified after 2018-06-22 
08:56:03.488641620 +0000", > "2018-06-22 08:56:09,832 DEBUG: 24511 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,iscsid_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,iscsid_config'", > "+ origin_of_time=/var/lib/config-data/iscsid.origin_of_time", > "+ touch /var/lib/config-data/iscsid.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,iscsid_config /etc/config.pp", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/iscsid", > "++ stat -c %y /var/lib/config-data/iscsid.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 08:56:03.488641620 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/iscsid", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/iscsid", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/iscsid.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/iscsid --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/iscsid --mtime=1970-01-01", > "2018-06-22 08:56:09,832 INFO: 24511 -- Removing container: docker-puppet-iscsid", > "2018-06-22 08:56:09,867 DEBUG: 24513 -- docker-puppet-neutron", > "2018-06-22 08:56:09,867 INFO: 24513 -- Finished processing puppet configs for neutron", > "2018-06-22 08:56:09,881 DEBUG: 24511 -- docker-puppet-iscsid", > "2018-06-22 08:56:09,881 INFO: 24511 -- Finished processing puppet configs for iscsid", > "2018-06-22 08:56:24,445 DEBUG: 24512 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 2.97 seconds", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File[/etc/nova/migration/identity]/content: content changed '{md5}056b96e7e8124e1bc55f77cba4e68ce7' to '{md5}a5a5f8a3e1fda6c42681ae00f4ddf02d'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File_line[nova_ssh_port]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/File[/etc/sasl2/libvirt.conf]/content: content changed '{md5}09c4fa846e8e27bfa3ab3325900d63ea' to '{md5}2f138c0278e1b666ec77a6d8ba3054a1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/Exec[set libvirt sasl credentials]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Target/File[/etc/nova/migration/authorized_keys]/content: content changed '{md5}dff145cb4e519333c0096aae8de2e77c' to '{md5}0a97037bb44fd64d20c1ae93194fa091'", > "Notice: /Stage[main]/Nova::Db/Nova_config[api_database/connection]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Nova_config[placement_database/connection]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[glance/api_servers]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/my_ip]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[api/auth_strategy]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/image_service]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[cinder/catalog_info]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[os_vif_linux_bridge/use_ipv6]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_api_faults]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[notifications/notification_format]/ensure: created", > "Notice: 
/Stage[main]/Nova/Nova_config[DEFAULT/state_path]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/service_down_time]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/rootwrap_config]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/report_interval]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_state_change]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_type]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_url]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/password]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/username]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/region_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/os_interface]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/reserved_host_memory_mb]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/heal_instance_info_cache_interval]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[key_manager/backend]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[compute/consecutive_build_service_disable_threshold]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/allow_resize_to_same_host]/ensure: created", > "Notice: /Stage[main]/Nova::Vncproxy::Common/Nova_config[vnc/novncproxy_base_url]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/vncserver_proxyclient_address]/ensure: created", > 
"Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/keymap]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/enabled]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[spice/enabled]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit_period]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/force_raw_images]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[glance/verify_glance_signatures]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/dhcp_domain]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/firewall_driver]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_is_fatal]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_timeout]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/default_floating_pool]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/url]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/timeout]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/region_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/username]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/password]/ensure: created", > "Notice: 
/Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_url]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/ovs_bridge]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/extension_sync_interval]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_type]/ensure: created", > "Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_uri]/ensure: created", > "Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_inbound_addr]/ensure: created", > "Notice: /Stage[main]/Nova::Migration::Libvirt/Libvirtd_config[listen_tls]/ensure: created", > "Notice: /Stage[main]/Nova::Migration::Libvirt/Libvirtd_config[listen_tcp]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_user]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_secret_uuid]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Rbd/File[/etc/nova/secret.xml]/ensure: defined content as '{md5}cfce3c4aa78e4e5b779d7deebcbeb575'", > "Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_type]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_pool]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_ceph_conf]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[DEFAULT/compute_driver]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[vnc/vncserver_listen]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/virt_type]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/cpu_mode]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_password]/ensure: created", > "Notice: 
/Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_key]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_partition]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/hw_disk_discard]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/enabled_perf_events]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/disk_cachemodes]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Libvirtd_config[unix_sock_group]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Libvirtd_config[auth_unix_ro]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Libvirtd_config[auth_unix_rw]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Libvirtd_config[unix_sock_ro_perms]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Libvirtd_config[unix_sock_rw_perms]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Qemu/Augeas[qemu-conf-limits]/returns: executed successfully", > "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/backend]/ensure: created", > "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/enabled]/ensure: created", > "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/memcache_servers]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/db_max_retries]/ensure: created", > "Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/log_dir]/ensure: 
created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Default[nova_config]/Nova_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Concurrency[nova_config]/Nova_config[oslo_concurrency/lock_path]/ensure: created", > "Notice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/content: content changed '{md5}40d961cd3154f0439fcac1a50bd77b96' to '{md5}fa73c8727bebcfc4b863b178339e54c4'", > "Notice: Applied catalog in 7.75 seconds", > " Total: 103", > " Success: 103", > " Changed: 103", > " Out of sync: 103", > " Total: 313", > " Skipped: 47", > " Concat file: 0.00", > " Concat fragment: 0.00", > " File line: 0.00", > " Exec: 0.01", > " Libvirtd config: 0.02", > " File: 0.03", > " Package: 0.08", > " Augeas: 0.56", > " Total: 10.77", > " Last run: 1529657783", > " Config retrieval: 3.32", > " Nova config: 6.74", > " Config: 1529657772", > "Gathering files modified after 2018-06-22 08:56:08.010615565 +0000", > "2018-06-22 08:56:24,446 DEBUG: 24512 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password'", > "+ origin_of_time=/var/lib/config-data/nova_libvirt.origin_of_time", > "+ touch 
/var/lib/config-data/nova_libvirt.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password /etc/config.pp", > "ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ipv6 instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 105]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/compute.pp\", 59]", > "Warning: ModuleLoader: module 'nova' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/config.pp\", 37]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 114]", > "Warning: Scope(Class[Nova::Db]): placement_database_connection has no effect as of pike, and may be removed in a future release", > "Warning: Scope(Class[Nova::Db]): placement_slave_connection has no effect as of pike, and may be removed in a future release", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/db.pp\", 126]:[\"/etc/puppet/modules/nova/manifests/init.pp\", 530]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/init.pp\", 533]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/compute.pp\", 59]", > " with Stdlib::Compat::Bool. 
There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/placement.pp\", 101]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 138]", > "Warning: Scope(Class[Nova::Placement]): The os_region_name parameter is deprecated and will be removed \\", > "in a future release. Please use region_name instead.", > "Warning: Unknown variable: '::nova::vncproxy::host'. at /etc/puppet/modules/nova/manifests/vncproxy/common.pp:31:5", > "Warning: Unknown variable: '::nova::vncproxy::vncproxy_protocol'. at /etc/puppet/modules/nova/manifests/vncproxy/common.pp:36:5", > "Warning: Unknown variable: '::nova::vncproxy::port'. at /etc/puppet/modules/nova/manifests/vncproxy/common.pp:41:5", > "Warning: Unknown variable: '::nova::vncproxy::vncproxy_path'. at /etc/puppet/modules/nova/manifests/vncproxy/common.pp:46:5", > "Warning: Unknown variable: '::nova::compute::pci_passthrough'. at /etc/puppet/modules/nova/manifests/compute/pci.pp:19:38", > "Warning: Unknown variable: '::nova::api::default_floating_pool'. at /etc/puppet/modules/nova/manifests/network/neutron.pp:112:38", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/compute/libvirt.pp\", 278]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/compute/libvirt.pp\", 33]", > " with Stdlib::Compat::Ip_Address. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/migration/target.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/migration/target.pp\", 56]", > "Warning: ModuleLoader: module 'mysql' has unresolved dependencies - it will only see those that are resolved. 
Use 'puppet module list --tree' to see information about modules", > "Warning: Exec[set libvirt sasl credentials](provider=posix): Cannot understand environment setting \"TLS_PASSWORD=\"", > "+ rsync_srcs+=' /var/lib/nova/.ssh'", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/nova/.ssh /var/lib/config-data/nova_libvirt", > "++ stat -c %y /var/lib/config-data/nova_libvirt.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 08:56:08.010615565 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/nova_libvirt", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/nova_libvirt", > "++ find /etc /root /opt /var/spool/cron /var/lib/nova/.ssh -newer /var/lib/config-data/nova_libvirt.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/nova_libvirt --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/nova_libvirt --mtime=1970-01-01", > "2018-06-22 08:56:24,446 INFO: 24512 -- Removing container: docker-puppet-nova_libvirt", > "2018-06-22 08:56:24,485 DEBUG: 24512 -- docker-puppet-nova_libvirt", > "2018-06-22 08:56:24,485 INFO: 24512 -- Finished processing puppet configs for nova_libvirt", > "2018-06-22 08:56:24,486 DEBUG: 24510 -- CONFIG_VOLUME_PREFIX: /var/lib/config-data", > "2018-06-22 08:56:24,486 DEBUG: 24510 -- STARTUP_CONFIG_PATTERN: /var/lib/tripleo-config/docker-container-startup-config-step_*.json", > "2018-06-22 08:56:24,488 DEBUG: 24510 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-22 08:56:24,488 DEBUG: 24510 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-22 08:56:24,488 DEBUG: 24510 -- Updating config hash for neutron_ovs_bridge, config_volume=iscsid 
hash=3906b40b63a7a48b090596695e6654d7", > "2018-06-22 08:56:24,489 DEBUG: 24510 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova_libvirt.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt", > "2018-06-22 08:56:24,489 DEBUG: 24510 -- Got hashfile /var/lib/config-data/puppet-generated/nova_libvirt.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt", > "2018-06-22 08:56:24,489 DEBUG: 24510 -- Updating config hash for nova_libvirt, config_volume=iscsid hash=edd389754844fe26392c726d84a174d3", > "2018-06-22 08:56:24,489 DEBUG: 24510 -- Updating config hash for nova_virtlogd, config_volume=iscsid hash=edd389754844fe26392c726d84a174d3", > "2018-06-22 08:56:24,491 DEBUG: 24510 -- Looking for hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer", > "2018-06-22 08:56:24,491 DEBUG: 24510 -- Got hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer", > "2018-06-22 08:56:24,491 DEBUG: 24510 -- Updating config hash for ceilometer_agent_compute, config_volume=iscsid hash=d53724dae0b1d6f13bc39da4d6d9c8ad", > "2018-06-22 08:56:24,491 DEBUG: 24510 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova_libvirt/etc.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt/etc", > "2018-06-22 08:56:24,491 DEBUG: 24510 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-22 08:56:24,491 DEBUG: 24510 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-22 08:56:24,491 DEBUG: 24510 -- Updating config hash for neutron_ovs_agent, config_volume=iscsid hash=3906b40b63a7a48b090596695e6654d7", > "2018-06-22 08:56:24,491 DEBUG: 24510 -- Looking for 
hashfile /var/lib/config-data/puppet-generated/nova_libvirt.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt", > "2018-06-22 08:56:24,491 DEBUG: 24510 -- Got hashfile /var/lib/config-data/puppet-generated/nova_libvirt.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt", > "2018-06-22 08:56:24,492 DEBUG: 24510 -- Updating config hash for nova_migration_target, config_volume=iscsid hash=edd389754844fe26392c726d84a174d3", > "2018-06-22 08:56:24,492 DEBUG: 24510 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova_libvirt.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt", > "2018-06-22 08:56:24,492 DEBUG: 24510 -- Got hashfile /var/lib/config-data/puppet-generated/nova_libvirt.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt", > "2018-06-22 08:56:24,492 DEBUG: 24510 -- Updating config hash for nova_compute, config_volume=iscsid hash=edd389754844fe26392c726d84a174d3", > "2018-06-22 08:56:24,492 DEBUG: 24510 -- Looking for hashfile /var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond", > "2018-06-22 08:56:24,492 DEBUG: 24510 -- Got hashfile /var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond", > "2018-06-22 08:56:24,492 DEBUG: 24510 -- Updating config hash for logrotate_crond, config_volume=iscsid hash=9eea5bfedf3f3972dbd6194f3019acf7" > ] >} >2018-06-22 04:58:08,600 p=11115 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "2018-06-22 08:55:34,079 INFO: 9544 -- Running docker-puppet", > "2018-06-22 08:55:34,079 DEBUG: 9544 -- CONFIG: /var/lib/docker-puppet/docker-puppet.json", > "2018-06-22 08:55:34,079 DEBUG: 9544 -- config_volume aodh", > "2018-06-22 08:55:34,080 DEBUG: 9544 -- puppet_tags aodh_api_paste_ini,aodh_config", > 
"2018-06-22 08:55:34,080 DEBUG: 9544 -- manifest include tripleo::profile::base::aodh::api", > "", > "include ::tripleo::profile::base::database::mysql::client", > "2018-06-22 08:55:34,080 DEBUG: 9544 -- config_image 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", > "2018-06-22 08:55:34,080 DEBUG: 9544 -- volumes []", > "2018-06-22 08:55:34,080 DEBUG: 9544 -- Adding new service", > "2018-06-22 08:55:34,080 DEBUG: 9544 -- config_volume aodh", > "2018-06-22 08:55:34,080 DEBUG: 9544 -- puppet_tags aodh_config", > "2018-06-22 08:55:34,080 DEBUG: 9544 -- manifest include tripleo::profile::base::aodh::evaluator", > "2018-06-22 08:55:34,080 DEBUG: 9544 -- Existing service, appending puppet tags and manifest", > "2018-06-22 08:55:34,080 DEBUG: 9544 -- manifest include tripleo::profile::base::aodh::listener", > "2018-06-22 08:55:34,081 DEBUG: 9544 -- config_image 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", > "2018-06-22 08:55:34,081 DEBUG: 9544 -- volumes []", > "2018-06-22 08:55:34,081 DEBUG: 9544 -- Existing service, appending puppet tags and manifest", > "2018-06-22 08:55:34,081 DEBUG: 9544 -- config_volume aodh", > "2018-06-22 08:55:34,081 DEBUG: 9544 -- puppet_tags aodh_config", > "2018-06-22 08:55:34,081 DEBUG: 9544 -- manifest include tripleo::profile::base::aodh::notifier", > "2018-06-22 08:55:34,081 DEBUG: 9544 -- config_volume ceilometer", > "2018-06-22 08:55:34,081 DEBUG: 9544 -- puppet_tags ceilometer_config", > "2018-06-22 08:55:34,081 DEBUG: 9544 -- manifest include ::tripleo::profile::base::ceilometer::agent::polling", > "2018-06-22 08:55:34,081 DEBUG: 9544 -- config_image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", > "2018-06-22 08:55:34,081 DEBUG: 9544 -- Adding new service", > "2018-06-22 08:55:34,081 DEBUG: 9544 -- manifest include ::tripleo::profile::base::ceilometer::agent::notification", > "2018-06-22 08:55:34,082 DEBUG: 9544 -- Existing service, appending puppet tags and manifest", > "2018-06-22 
08:55:34,082 DEBUG: 9544 -- config_volume cinder", > "2018-06-22 08:55:34,082 DEBUG: 9544 -- puppet_tags cinder_config,file,concat,file_line", > "2018-06-22 08:55:34,082 DEBUG: 9544 -- manifest include ::tripleo::profile::base::cinder::api", > "2018-06-22 08:55:34,082 DEBUG: 9544 -- config_image 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", > "2018-06-22 08:55:34,082 DEBUG: 9544 -- volumes []", > "2018-06-22 08:55:34,082 DEBUG: 9544 -- Adding new service", > "2018-06-22 08:55:34,082 DEBUG: 9544 -- manifest include ::tripleo::profile::base::cinder::backup::ceph", > "2018-06-22 08:55:34,082 DEBUG: 9544 -- manifest include ::tripleo::profile::base::cinder::scheduler", > "2018-06-22 08:55:34,083 DEBUG: 9544 -- puppet_tags cinder_config,file,concat,file_line", > "2018-06-22 08:55:34,083 DEBUG: 9544 -- manifest include ::tripleo::profile::base::lvm", > "include ::tripleo::profile::base::cinder::volume", > "2018-06-22 08:55:34,083 DEBUG: 9544 -- config_image 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", > "2018-06-22 08:55:34,083 DEBUG: 9544 -- volumes []", > "2018-06-22 08:55:34,083 DEBUG: 9544 -- Existing service, appending puppet tags and manifest", > "2018-06-22 08:55:34,083 DEBUG: 9544 -- config_volume clustercheck", > "2018-06-22 08:55:34,083 DEBUG: 9544 -- puppet_tags file", > "2018-06-22 08:55:34,083 DEBUG: 9544 -- manifest include ::tripleo::profile::pacemaker::clustercheck", > "2018-06-22 08:55:34,083 DEBUG: 9544 -- config_image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", > "2018-06-22 08:55:34,083 DEBUG: 9544 -- Adding new service", > "2018-06-22 08:55:34,083 DEBUG: 9544 -- config_volume glance_api", > "2018-06-22 08:55:34,083 DEBUG: 9544 -- puppet_tags glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config", > "2018-06-22 08:55:34,083 DEBUG: 9544 -- manifest include ::tripleo::profile::base::glance::api", > "2018-06-22 08:55:34,083 DEBUG: 9544 -- config_image 
192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", > "2018-06-22 08:55:34,083 DEBUG: 9544 -- config_volume gnocchi", > "2018-06-22 08:55:34,083 DEBUG: 9544 -- puppet_tags gnocchi_api_paste_ini,gnocchi_config", > "2018-06-22 08:55:34,083 DEBUG: 9544 -- manifest include ::tripleo::profile::base::gnocchi::api", > "2018-06-22 08:55:34,084 DEBUG: 9544 -- config_image 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", > "2018-06-22 08:55:34,084 DEBUG: 9544 -- volumes []", > "2018-06-22 08:55:34,084 DEBUG: 9544 -- Adding new service", > "2018-06-22 08:55:34,084 DEBUG: 9544 -- config_volume gnocchi", > "2018-06-22 08:55:34,084 DEBUG: 9544 -- puppet_tags gnocchi_config", > "2018-06-22 08:55:34,084 DEBUG: 9544 -- manifest include ::tripleo::profile::base::gnocchi::metricd", > "2018-06-22 08:55:34,084 DEBUG: 9544 -- Existing service, appending puppet tags and manifest", > "2018-06-22 08:55:34,084 DEBUG: 9544 -- manifest include ::tripleo::profile::base::gnocchi::statsd", > "2018-06-22 08:55:34,084 DEBUG: 9544 -- config_volume haproxy", > "2018-06-22 08:55:34,084 DEBUG: 9544 -- puppet_tags haproxy_config", > "2018-06-22 08:55:34,084 DEBUG: 9544 -- manifest exec {'wait-for-settle': command => '/bin/true' }", > "class tripleo::firewall(){}; define tripleo::firewall::rule( $port = undef, $dport = undef, $sport = undef, $proto = undef, $action = undef, $state = undef, $source = undef, $iniface = undef, $chain = undef, $destination = undef, $extras = undef){}", > "['pcmk_bundle', 'pcmk_resource', 'pcmk_property', 'pcmk_constraint', 'pcmk_resource_default'].each |String $val| { noop_resource($val) }", > "include ::tripleo::profile::pacemaker::haproxy_bundle", > "2018-06-22 08:55:34,084 DEBUG: 9544 -- config_image 192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", > "2018-06-22 08:55:34,084 DEBUG: 9544 -- volumes [u'/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro', u'/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro', 
u'/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro', u'/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro']", > "2018-06-22 08:55:34,085 DEBUG: 9544 -- Adding new service", > "2018-06-22 08:55:34,085 DEBUG: 9544 -- config_volume heat_api", > "2018-06-22 08:55:34,085 DEBUG: 9544 -- puppet_tags heat_config,file,concat,file_line", > "2018-06-22 08:55:34,085 DEBUG: 9544 -- manifest include ::tripleo::profile::base::heat::api", > "2018-06-22 08:55:34,085 DEBUG: 9544 -- config_image 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", > "2018-06-22 08:55:34,085 DEBUG: 9544 -- volumes []", > "2018-06-22 08:55:34,085 DEBUG: 9544 -- config_volume heat_api_cfn", > "2018-06-22 08:55:34,085 DEBUG: 9544 -- manifest include ::tripleo::profile::base::heat::api_cfn", > "2018-06-22 08:55:34,085 DEBUG: 9544 -- config_image 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-06-19.4", > "2018-06-22 08:55:34,085 DEBUG: 9544 -- config_volume heat", > "2018-06-22 08:55:34,085 DEBUG: 9544 -- manifest include ::tripleo::profile::base::heat::engine", > "2018-06-22 08:55:34,085 DEBUG: 9544 -- config_volume horizon", > "2018-06-22 08:55:34,086 DEBUG: 9544 -- puppet_tags horizon_config", > "2018-06-22 08:55:34,086 DEBUG: 9544 -- manifest include ::tripleo::profile::base::horizon", > "2018-06-22 08:55:34,086 DEBUG: 9544 -- config_image 192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4", > "2018-06-22 08:55:34,086 DEBUG: 9544 -- volumes []", > "2018-06-22 08:55:34,086 DEBUG: 9544 -- Adding new service", > "2018-06-22 08:55:34,086 DEBUG: 9544 -- config_volume iscsid", > "2018-06-22 08:55:34,086 DEBUG: 9544 -- puppet_tags iscsid_config", > "2018-06-22 08:55:34,086 DEBUG: 9544 -- manifest include ::tripleo::profile::base::iscsid", > "2018-06-22 08:55:34,086 DEBUG: 9544 -- config_image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", > "2018-06-22 08:55:34,086 DEBUG: 9544 -- volumes [u'/etc/iscsi:/etc/iscsi']", > "2018-06-22 
08:55:34,086 DEBUG: 9544 -- config_volume keystone", > "2018-06-22 08:55:34,086 DEBUG: 9544 -- puppet_tags keystone_config,keystone_domain_config", > "2018-06-22 08:55:34,086 DEBUG: 9544 -- manifest ['Keystone_user', 'Keystone_endpoint', 'Keystone_domain', 'Keystone_tenant', 'Keystone_user_role', 'Keystone_role', 'Keystone_service'].each |String $val| { noop_resource($val) }", > "include ::tripleo::profile::base::keystone", > "2018-06-22 08:55:34,086 DEBUG: 9544 -- config_image 192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", > "2018-06-22 08:55:34,086 DEBUG: 9544 -- config_volume memcached", > "2018-06-22 08:55:34,086 DEBUG: 9544 -- puppet_tags file", > "2018-06-22 08:55:34,087 DEBUG: 9544 -- manifest include ::tripleo::profile::base::memcached", > "2018-06-22 08:55:34,087 DEBUG: 9544 -- config_image 192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4", > "2018-06-22 08:55:34,087 DEBUG: 9544 -- volumes []", > "2018-06-22 08:55:34,087 DEBUG: 9544 -- Adding new service", > "2018-06-22 08:55:34,087 DEBUG: 9544 -- config_volume mysql", > "2018-06-22 08:55:34,087 DEBUG: 9544 -- puppet_tags file", > "2018-06-22 08:55:34,087 DEBUG: 9544 -- manifest ['Mysql_datadir', 'Mysql_user', 'Mysql_database', 'Mysql_grant', 'Mysql_plugin'].each |String $val| { noop_resource($val) }", > "exec {'wait-for-settle': command => '/bin/true' }", > "include ::tripleo::profile::pacemaker::database::mysql_bundle", > "2018-06-22 08:55:34,087 DEBUG: 9544 -- config_image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", > "2018-06-22 08:55:34,087 DEBUG: 9544 -- config_volume neutron", > "2018-06-22 08:55:34,087 DEBUG: 9544 -- puppet_tags neutron_config,neutron_api_config", > "2018-06-22 08:55:34,087 DEBUG: 9544 -- manifest include tripleo::profile::base::neutron::server", > "2018-06-22 08:55:34,087 DEBUG: 9544 -- config_image 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-22 08:55:34,087 DEBUG: 9544 -- puppet_tags neutron_plugin_ml2", > 
"2018-06-22 08:55:34,087 DEBUG: 9544 -- manifest include ::tripleo::profile::base::neutron::plugins::ml2", > "2018-06-22 08:55:34,087 DEBUG: 9544 -- Existing service, appending puppet tags and manifest", > "2018-06-22 08:55:34,088 DEBUG: 9544 -- config_volume neutron", > "2018-06-22 08:55:34,088 DEBUG: 9544 -- puppet_tags neutron_config,neutron_dhcp_agent_config", > "2018-06-22 08:55:34,088 DEBUG: 9544 -- manifest include tripleo::profile::base::neutron::dhcp", > "2018-06-22 08:55:34,088 DEBUG: 9544 -- config_image 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-22 08:55:34,088 DEBUG: 9544 -- volumes []", > "2018-06-22 08:55:34,088 DEBUG: 9544 -- Existing service, appending puppet tags and manifest", > "2018-06-22 08:55:34,088 DEBUG: 9544 -- puppet_tags neutron_config,neutron_l3_agent_config", > "2018-06-22 08:55:34,088 DEBUG: 9544 -- manifest include tripleo::profile::base::neutron::l3", > "2018-06-22 08:55:34,088 DEBUG: 9544 -- puppet_tags neutron_config,neutron_metadata_agent_config", > "2018-06-22 08:55:34,088 DEBUG: 9544 -- manifest include tripleo::profile::base::neutron::metadata", > "2018-06-22 08:55:34,088 DEBUG: 9544 -- puppet_tags neutron_config,neutron_agent_ovs,neutron_plugin_ml2", > "2018-06-22 08:55:34,089 DEBUG: 9544 -- manifest include ::tripleo::profile::base::neutron::ovs", > "2018-06-22 08:55:34,089 DEBUG: 9544 -- config_image 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-22 08:55:34,089 DEBUG: 9544 -- volumes [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch']", > "2018-06-22 08:55:34,089 DEBUG: 9544 -- Existing service, appending puppet tags and manifest", > "2018-06-22 08:55:34,089 DEBUG: 9544 -- config_volume nova", > "2018-06-22 08:55:34,089 DEBUG: 9544 -- puppet_tags nova_config", > "2018-06-22 08:55:34,089 DEBUG: 9544 -- manifest ['Nova_cell_v2'].each |String $val| { noop_resource($val) }", > "include tripleo::profile::base::nova::api", > "2018-06-22 
08:55:34,089 DEBUG: 9544 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", > "2018-06-22 08:55:34,089 DEBUG: 9544 -- volumes []", > "2018-06-22 08:55:34,089 DEBUG: 9544 -- Adding new service", > "2018-06-22 08:55:34,089 DEBUG: 9544 -- manifest include tripleo::profile::base::nova::conductor", > "2018-06-22 08:55:34,089 DEBUG: 9544 -- manifest include tripleo::profile::base::nova::consoleauth", > "2018-06-22 08:55:34,090 DEBUG: 9544 -- volumes []", > "2018-06-22 08:55:34,090 DEBUG: 9544 -- Existing service, appending puppet tags and manifest", > "2018-06-22 08:55:34,090 DEBUG: 9544 -- config_volume nova_placement", > "2018-06-22 08:55:34,090 DEBUG: 9544 -- puppet_tags nova_config", > "2018-06-22 08:55:34,090 DEBUG: 9544 -- manifest include tripleo::profile::base::nova::placement", > "2018-06-22 08:55:34,090 DEBUG: 9544 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4", > "2018-06-22 08:55:34,090 DEBUG: 9544 -- Adding new service", > "2018-06-22 08:55:34,090 DEBUG: 9544 -- config_volume nova", > "2018-06-22 08:55:34,090 DEBUG: 9544 -- manifest include tripleo::profile::base::nova::scheduler", > "2018-06-22 08:55:34,090 DEBUG: 9544 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", > "2018-06-22 08:55:34,090 DEBUG: 9544 -- manifest include tripleo::profile::base::nova::vncproxy", > "2018-06-22 08:55:34,091 DEBUG: 9544 -- config_volume crond", > "2018-06-22 08:55:34,091 DEBUG: 9544 -- puppet_tags ", > "2018-06-22 08:55:34,091 DEBUG: 9544 -- manifest include ::tripleo::profile::base::logging::logrotate", > "2018-06-22 08:55:34,091 DEBUG: 9544 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-22 08:55:34,091 DEBUG: 9544 -- volumes []", > "2018-06-22 08:55:34,091 DEBUG: 9544 -- Adding new service", > "2018-06-22 08:55:34,091 DEBUG: 9544 -- config_volume panko", > "2018-06-22 08:55:34,091 DEBUG: 9544 -- puppet_tags panko_api_paste_ini,panko_config", > 
"2018-06-22 08:55:34,091 DEBUG: 9544 -- manifest include tripleo::profile::base::panko::api", > "2018-06-22 08:55:34,091 DEBUG: 9544 -- config_image 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", > "2018-06-22 08:55:34,091 DEBUG: 9544 -- config_volume rabbitmq", > "2018-06-22 08:55:34,091 DEBUG: 9544 -- puppet_tags file", > "2018-06-22 08:55:34,091 DEBUG: 9544 -- manifest ['Rabbitmq_policy', 'Rabbitmq_user'].each |String $val| { noop_resource($val) }", > "include ::tripleo::profile::base::rabbitmq", > "2018-06-22 08:55:34,091 DEBUG: 9544 -- config_image 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", > "2018-06-22 08:55:34,091 DEBUG: 9544 -- config_volume redis", > "2018-06-22 08:55:34,091 DEBUG: 9544 -- puppet_tags exec", > "2018-06-22 08:55:34,092 DEBUG: 9544 -- manifest include ::tripleo::profile::pacemaker::database::redis_bundle", > "2018-06-22 08:55:34,092 DEBUG: 9544 -- config_image 192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", > "2018-06-22 08:55:34,092 DEBUG: 9544 -- volumes []", > "2018-06-22 08:55:34,092 DEBUG: 9544 -- Adding new service", > "2018-06-22 08:55:34,092 DEBUG: 9544 -- config_volume sahara", > "2018-06-22 08:55:34,092 DEBUG: 9544 -- puppet_tags sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template", > "2018-06-22 08:55:34,092 DEBUG: 9544 -- manifest include ::tripleo::profile::base::sahara::api", > "2018-06-22 08:55:34,092 DEBUG: 9544 -- config_image 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4", > "2018-06-22 08:55:34,092 DEBUG: 9544 -- puppet_tags sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template", > "2018-06-22 08:55:34,092 DEBUG: 9544 -- manifest include ::tripleo::profile::base::sahara::engine", > "2018-06-22 08:55:34,092 DEBUG: 9544 -- Existing service, appending puppet tags and manifest", > "2018-06-22 08:55:34,092 DEBUG: 9544 -- config_volume swift", > "2018-06-22 08:55:34,092 DEBUG: 9544 -- puppet_tags 
swift_config,swift_proxy_config,swift_keymaster_config", > "2018-06-22 08:55:34,092 DEBUG: 9544 -- manifest include ::tripleo::profile::base::swift::proxy", > "2018-06-22 08:55:34,092 DEBUG: 9544 -- config_image 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", > "2018-06-22 08:55:34,093 DEBUG: 9544 -- Adding new service", > "2018-06-22 08:55:34,093 DEBUG: 9544 -- config_volume swift_ringbuilder", > "2018-06-22 08:55:34,093 DEBUG: 9544 -- puppet_tags exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball", > "2018-06-22 08:55:34,093 DEBUG: 9544 -- manifest include ::tripleo::profile::base::swift::ringbuilder", > "2018-06-22 08:55:34,093 DEBUG: 9544 -- config_image 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", > "2018-06-22 08:55:34,093 DEBUG: 9544 -- volumes []", > "2018-06-22 08:55:34,093 DEBUG: 9544 -- config_volume swift", > "2018-06-22 08:55:34,093 DEBUG: 9544 -- puppet_tags swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server", > "2018-06-22 08:55:34,093 DEBUG: 9544 -- manifest include ::tripleo::profile::base::swift::storage", > "class xinetd() {}", > "2018-06-22 08:55:34,093 DEBUG: 9544 -- Existing service, appending puppet tags and manifest", > "2018-06-22 08:55:34,093 INFO: 9544 -- Service compilation completed.", > "2018-06-22 08:55:34,094 DEBUG: 9544 -- - [u'nova_placement', u'file,file_line,concat,augeas,cron,nova_config', u'include tripleo::profile::base::nova::placement\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4', []]", > "2018-06-22 08:55:34,094 DEBUG: 9544 -- - [u'aodh', 
u'file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config', u'include tripleo::profile::base::aodh::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::aodh::evaluator\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::aodh::listener\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::aodh::notifier\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4', []]", > "2018-06-22 08:55:34,094 DEBUG: 9544 -- - [u'heat_api', u'file,file_line,concat,augeas,cron,heat_config,file,concat,file_line', u'include ::tripleo::profile::base::heat::api\\n', u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4', []]", > "2018-06-22 08:55:34,094 DEBUG: 9544 -- - [u'swift_ringbuilder', u'file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball', u'include ::tripleo::profile::base::swift::ringbuilder', u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4', []]", > "2018-06-22 08:55:34,094 DEBUG: 9544 -- - [u'sahara', u'file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template', u'include ::tripleo::profile::base::sahara::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::sahara::engine\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4', []]", > "2018-06-22 08:55:34,094 DEBUG: 9544 -- - [u'mysql', u'file,file_line,concat,augeas,cron,file', 
u\"['Mysql_datadir', 'Mysql_user', 'Mysql_database', 'Mysql_grant', 'Mysql_plugin'].each |String $val| { noop_resource($val) }\\nexec {'wait-for-settle': command => '/bin/true' }\\ninclude ::tripleo::profile::pacemaker::database::mysql_bundle\", u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', []]", > "2018-06-22 08:55:34,095 DEBUG: 9544 -- - [u'gnocchi', u'file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config', u'include ::tripleo::profile::base::gnocchi::api\\n\\ninclude ::tripleo::profile::base::gnocchi::metricd\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::gnocchi::statsd\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4', []]", > "2018-06-22 08:55:34,095 DEBUG: 9544 -- - [u'clustercheck', u'file,file_line,concat,augeas,cron,file', u'include ::tripleo::profile::pacemaker::clustercheck', u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', []]", > "2018-06-22 08:55:34,095 DEBUG: 9544 -- - [u'redis', u'file,file_line,concat,augeas,cron,exec', u'include ::tripleo::profile::pacemaker::database::redis_bundle', u'192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4', []]", > "2018-06-22 08:55:34,095 DEBUG: 9544 -- - [u'nova', u'file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config', u\"['Nova_cell_v2'].each |String $val| { noop_resource($val) }\\ninclude tripleo::profile::base::nova::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::nova::conductor\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::nova::consoleauth\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::nova::scheduler\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude 
tripleo::profile::base::nova::vncproxy\\n\\ninclude ::tripleo::profile::base::database::mysql::client\", u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', []]", > "2018-06-22 08:55:34,095 DEBUG: 9544 -- - [u'iscsid', u'file,file_line,concat,augeas,cron,iscsid_config', u'include ::tripleo::profile::base::iscsid', u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4', [u'/etc/iscsi:/etc/iscsi']]", > "2018-06-22 08:55:34,095 DEBUG: 9544 -- - [u'glance_api', u'file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config', u'include ::tripleo::profile::base::glance::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4', []]", > "2018-06-22 08:55:34,095 DEBUG: 9544 -- - [u'keystone', u'file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config', u\"['Keystone_user', 'Keystone_endpoint', 'Keystone_domain', 'Keystone_tenant', 'Keystone_user_role', 'Keystone_role', 'Keystone_service'].each |String $val| { noop_resource($val) }\\ninclude ::tripleo::profile::base::keystone\\n\\ninclude ::tripleo::profile::base::database::mysql::client\", u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4', []]", > "2018-06-22 08:55:34,095 DEBUG: 9544 -- - [u'memcached', u'file,file_line,concat,augeas,cron,file', u'include ::tripleo::profile::base::memcached\\n', u'192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4', []]", > "2018-06-22 08:55:34,095 DEBUG: 9544 -- - [u'panko', u'file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config', u'include tripleo::profile::base::panko::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4', []]", > "2018-06-22 08:55:34,095 DEBUG: 9544 -- - [u'heat', u'file,file_line,concat,augeas,cron,heat_config,file,concat,file_line', u'include ::tripleo::profile::base::heat::engine\\n\\ninclude 
::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4', []]", > "2018-06-22 08:55:34,095 DEBUG: 9544 -- - [u'cinder', u'file,file_line,concat,augeas,cron,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line', u'include ::tripleo::profile::base::cinder::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::cinder::backup::ceph\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::cinder::scheduler\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::lvm\\ninclude ::tripleo::profile::base::cinder::volume\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4', []]", > "2018-06-22 08:55:34,095 DEBUG: 9544 -- - [u'swift', u'file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server', u'include ::tripleo::profile::base::swift::proxy\\n\\ninclude ::tripleo::profile::base::swift::storage\\n\\nclass xinetd() {}', u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4', []]", > "2018-06-22 08:55:34,095 DEBUG: 9544 -- - [u'crond', 'file,file_line,concat,augeas,cron', u'include ::tripleo::profile::base::logging::logrotate', u'192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4', []]", > "2018-06-22 08:55:34,095 DEBUG: 9544 -- - [u'haproxy', u'file,file_line,concat,augeas,cron,haproxy_config', u\"exec {'wait-for-settle': command => '/bin/true' }\\nclass tripleo::firewall(){}; define tripleo::firewall::rule( $port = undef, $dport = undef, $sport = undef, $proto = undef, $action = undef, $state = undef, $source = undef, $iniface = 
undef, $chain = undef, $destination = undef, $extras = undef){}\\n['pcmk_bundle', 'pcmk_resource', 'pcmk_property', 'pcmk_constraint', 'pcmk_resource_default'].each |String $val| { noop_resource($val) }\\ninclude ::tripleo::profile::pacemaker::haproxy_bundle\", u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4', [u'/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro', u'/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro', u'/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro', u'/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro']]", > "2018-06-22 08:55:34,095 DEBUG: 9544 -- - [u'ceilometer', u'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', u'include ::tripleo::profile::base::ceilometer::agent::polling\\n\\ninclude ::tripleo::profile::base::ceilometer::agent::notification\\n', u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4', []]", > "2018-06-22 08:55:34,095 DEBUG: 9544 -- - [u'rabbitmq', u'file,file_line,concat,augeas,cron,file', u\"['Rabbitmq_policy', 'Rabbitmq_user'].each |String $val| { noop_resource($val) }\\ninclude ::tripleo::profile::base::rabbitmq\\n\", u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4', []]", > "2018-06-22 08:55:34,095 DEBUG: 9544 -- - [u'neutron', u'file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2', u'include tripleo::profile::base::neutron::server\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::neutron::plugins::ml2\\n\\ninclude tripleo::profile::base::neutron::dhcp\\n\\ninclude tripleo::profile::base::neutron::l3\\n\\ninclude tripleo::profile::base::neutron::metadata\\n\\ninclude ::tripleo::profile::base::neutron::ovs\\n', 
u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch']]", > "2018-06-22 08:55:34,095 DEBUG: 9544 -- - [u'horizon', u'file,file_line,concat,augeas,cron,horizon_config', u'include ::tripleo::profile::base::horizon\\n', u'192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4', []]", > "2018-06-22 08:55:34,095 DEBUG: 9544 -- - [u'heat_api_cfn', u'file,file_line,concat,augeas,cron,heat_config,file,concat,file_line', u'include ::tripleo::profile::base::heat::api_cfn\\n', u'192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-06-19.4', []]", > "2018-06-22 08:55:34,096 INFO: 9544 -- Starting multiprocess configuration steps. Using 3 processes.", > "2018-06-22 08:55:34,108 INFO: 9545 -- Starting configuration of nova_placement using image 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4", > "2018-06-22 08:55:34,109 DEBUG: 9545 -- config_volume nova_placement", > "2018-06-22 08:55:34,108 INFO: 9546 -- Starting configuration of swift_ringbuilder using image 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", > "2018-06-22 08:55:34,109 DEBUG: 9545 -- puppet_tags file,file_line,concat,augeas,cron,nova_config", > "2018-06-22 08:55:34,109 DEBUG: 9546 -- config_volume swift_ringbuilder", > "2018-06-22 08:55:34,109 DEBUG: 9545 -- manifest include tripleo::profile::base::nova::placement", > "2018-06-22 08:55:34,109 DEBUG: 9546 -- puppet_tags file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball", > "2018-06-22 08:55:34,109 DEBUG: 9545 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4", > "2018-06-22 08:55:34,109 DEBUG: 9546 -- manifest include ::tripleo::profile::base::swift::ringbuilder", > "2018-06-22 08:55:34,109 
DEBUG: 9545 -- volumes []", > "2018-06-22 08:55:34,109 DEBUG: 9546 -- config_image 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", > "2018-06-22 08:55:34,109 DEBUG: 9546 -- volumes []", > "2018-06-22 08:55:34,109 INFO: 9547 -- Starting configuration of gnocchi using image 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", > "2018-06-22 08:55:34,109 DEBUG: 9547 -- config_volume gnocchi", > "2018-06-22 08:55:34,109 DEBUG: 9547 -- puppet_tags file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config", > "2018-06-22 08:55:34,110 DEBUG: 9547 -- manifest include ::tripleo::profile::base::gnocchi::api", > "include ::tripleo::profile::base::gnocchi::metricd", > "include ::tripleo::profile::base::gnocchi::statsd", > "2018-06-22 08:55:34,110 DEBUG: 9547 -- config_image 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", > "2018-06-22 08:55:34,110 DEBUG: 9547 -- volumes []", > "2018-06-22 08:55:34,111 INFO: 9545 -- Removing container: docker-puppet-nova_placement", > "2018-06-22 08:55:34,111 INFO: 9546 -- Removing container: docker-puppet-swift_ringbuilder", > "2018-06-22 08:55:34,111 INFO: 9547 -- Removing container: docker-puppet-gnocchi", > "2018-06-22 08:55:34,208 INFO: 9547 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", > "2018-06-22 08:55:34,209 INFO: 9546 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", > "2018-06-22 08:55:34,211 INFO: 9545 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4", > "2018-06-22 08:55:52,712 DEBUG: 9546 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server", > "e0f71f706c2a: Pulling fs layer", > "121ab4741000: Pulling fs layer", > "a8ff0031dfcb: Pulling fs layer", > "c66228eb2ac7: Pulling fs layer", > "a98c7da29d65: Pulling fs layer", > "c4603b657b73: Pulling fs layer", > "c66228eb2ac7: Waiting", > "c4603b657b73: Waiting", > "a98c7da29d65: Waiting", > "121ab4741000: Verifying Checksum", > "121ab4741000: Download complete", > "c66228eb2ac7: Verifying Checksum", > "c66228eb2ac7: Download complete", > "a98c7da29d65: Verifying Checksum", > "a98c7da29d65: Download complete", > "a8ff0031dfcb: Download complete", > "e0f71f706c2a: Download complete", > "c4603b657b73: Verifying Checksum", > "c4603b657b73: Download complete", > "e0f71f706c2a: Pull complete", > "121ab4741000: Pull complete", > "a8ff0031dfcb: Pull complete", > "c66228eb2ac7: Pull complete", > "a98c7da29d65: Pull complete", > "c4603b657b73: Pull complete", > "Digest: sha256:632f29598f1ea7b96a5573d0b5a942b3a1f571783804cdc07dac0910e97d1a87", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", > "2018-06-22 08:55:52,715 DEBUG: 9546 -- NET_HOST enabled", > "2018-06-22 08:55:52,716 DEBUG: 9546 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-swift_ringbuilder --env PUPPET_TAGS=file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball --env NAME=swift_ringbuilder --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpmWoOD2:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume 
tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", > "2018-06-22 08:55:56,423 DEBUG: 9545 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-nova-placement-api ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-nova-placement-api", > "0e3031608420: Pulling fs layer", > "dd9c4679b681: Pulling fs layer", > "dd9c4679b681: Waiting", > "0e3031608420: Waiting", > "a8ff0031dfcb: Verifying Checksum", > "e0f71f706c2a: Verifying Checksum", > "dd9c4679b681: Verifying Checksum", > "dd9c4679b681: Download complete", > "0e3031608420: Verifying Checksum", > "0e3031608420: Download complete", > "0e3031608420: Pull complete", > "dd9c4679b681: Pull complete", > "Digest: sha256:2336d644bd74c35fe7e050376f6d7a1b718ae6faf3556cf63917aceecdf581b6", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4", > "2018-06-22 08:55:56,427 DEBUG: 9545 -- NET_HOST enabled", > "2018-06-22 08:55:56,427 DEBUG: 9545 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-nova_placement --env PUPPET_TAGS=file,file_line,concat,augeas,cron,nova_config --env NAME=nova_placement --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmphG2fSO:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z 
--volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4", > "2018-06-22 08:55:59,249 DEBUG: 9547 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-gnocchi-api ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-gnocchi-api", > "64612d8109ce: Pulling fs layer", > "2d8b51759f9c: Pulling fs layer", > "64612d8109ce: Waiting", > "2d8b51759f9c: Waiting", > "2d8b51759f9c: Verifying Checksum", > "2d8b51759f9c: Download complete", > "64612d8109ce: Verifying Checksum", > "64612d8109ce: Download complete", > "64612d8109ce: Pull complete", > "2d8b51759f9c: Pull complete", > "Digest: sha256:0824e3fa2c22ac0acb43883a29cce2fbdf54a9cce722e559cc5c6325e46c2142", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", > "2018-06-22 08:55:59,252 DEBUG: 9547 -- NET_HOST enabled", > "2018-06-22 08:55:59,253 DEBUG: 9547 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-gnocchi --env PUPPET_TAGS=file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config --env NAME=gnocchi --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpLUB0NU:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume 
/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", > "2018-06-22 08:56:06,814 DEBUG: 9546 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 1.15 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[fetch_swift_ring_tarball]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[extract_swift_ring_tarball]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[extract_swift_ring_tarball]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Swift/File[/var/lib/swift]/group: group changed 'root' to 'swift'", > "Notice: /Stage[main]/Swift/File[/etc/swift/swift.conf]/owner: owner changed 'root' to 'swift'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Create[object]/Exec[create_object]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Create[account]/Exec[create_account]/returns: 
executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Create[container]/Exec[create_container]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Tripleo::Profile::Base::Swift::Add_devices[r1z1-172.17.4.19:%PORT%/d1]/Ring_object_device[172.17.4.19:6000/d1]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Tripleo::Profile::Base::Swift::Add_devices[r1z1-172.17.4.19:%PORT%/d1]/Ring_container_device[172.17.4.19:6001/d1]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Tripleo::Profile::Base::Swift::Add_devices[r1z1-172.17.4.19:%PORT%/d1]/Ring_account_device[172.17.4.19:6002/d1]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Rebalance[object]/Exec[rebalance_object]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Rebalance[account]/Exec[rebalance_account]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Rebalance[container]/Exec[rebalance_container]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[create_swift_ring_tarball]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[create_swift_ring_tarball]: Triggered 'refresh' from 3 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[upload_swift_ring_tarball]: Triggered 'refresh' from 2 events", > "Notice: Applied catalog in 4.69 seconds", > "Changes:", > " Total: 11", > "Events:", > " Success: 11", > "Resources:", > " Changed: 11", > " Out of sync: 11", > " Skipped: 19", > " Total: 36", > " Restarted: 6", > "Time:", > " File: 0.00", > " Ring object device: 0.54", > " Ring account device: 0.56", > " Ring 
container device: 0.60", > " Config retrieval: 1.35", > " Exec: 1.45", > " Last run: 1529657766", > " Total: 4.51", > "Version:", > " Config: 1529657760", > " Puppet: 4.8.2", > "Gathering files modified after 2018-06-22 08:55:53.009915376 +0000", > "2018-06-22 08:56:06,815 DEBUG: 9546 -- + mkdir -p /etc/puppet", > "+ cp -a /tmp/puppet-etc/auth.conf /tmp/puppet-etc/hiera.yaml /tmp/puppet-etc/hieradata /tmp/puppet-etc/modules /tmp/puppet-etc/puppet.conf /tmp/puppet-etc/ssl /etc/puppet", > "+ rm -Rf /etc/puppet/ssl", > "+ echo '{\"step\": 6}'", > "+ TAGS=", > "+ '[' -n file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball'", > "+ origin_of_time=/var/lib/config-data/swift_ringbuilder.origin_of_time", > "+ touch /var/lib/config-data/swift_ringbuilder.origin_of_time", > "+ sync", > "+ set +e", > "+ FACTER_hostname=controller-0", > "+ FACTER_uuid=docker", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball /etc/config.pp", > "Failed to get D-Bus connection: Operation not permitted", > "Warning: Facter: Could not retrieve fact='nic_alias', resolution='<anonymous>': Could not execute 
'/usr/bin/os-net-config -i': command not found", > "Warning: Undefined variable 'deploy_config_name'; ", > " (file & line not available)", > "Warning: ModuleLoader: module 'swift' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/swift/ringbuilder.pp\", 113]:[\"/etc/config.pp\", 2]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/swift/manifests/ringbuilder/create.pp\", 44]:", > "Warning: Unexpected line: Ring file /etc/swift/object.ring.gz not found, probably it hasn't been written yet", > "Warning: Unexpected line: Devices: id region zone ip address:port replication ip:port name weight partitions balance flags meta", > "Warning: Unexpected line: There are no devices in this ring, or all devices have been deleted", > "Warning: Unexpected line: Ring file /etc/swift/container.ring.gz not found, probably it hasn't been written yet", > "Warning: Unexpected line: Ring file /etc/swift/account.ring.gz not found, probably it hasn't been written yet", > "+ rc=2", > "+ set -e", > "+ '[' 2 -ne 2 -a 2 -ne 0 ']'", > "+ '[' -z '' ']'", > "+ archivedirs=(\"/etc\" \"/root\" \"/opt\" \"/var/lib/ironic/tftpboot\" \"/var/lib/ironic/httpboot\" \"/var/www\" \"/var/spool/cron\" \"/var/lib/nova/.ssh\")", > "+ rsync_srcs=", > "+ for d in '\"${archivedirs[@]}\"'", > "+ '[' -d /etc ']'", > "+ rsync_srcs+=' /etc'", > "+ '[' -d /root ']'", > "+ rsync_srcs+=' /root'", > "+ '[' -d /opt ']'", > "+ rsync_srcs+=' /opt'", > "+ '[' -d /var/lib/ironic/tftpboot ']'", > "+ '[' -d 
/var/lib/ironic/httpboot ']'", > "+ '[' -d /var/www ']'", > "+ rsync_srcs+=' /var/www'", > "+ '[' -d /var/spool/cron ']'", > "+ rsync_srcs+=' /var/spool/cron'", > "+ '[' -d /var/lib/nova/.ssh ']'", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/swift_ringbuilder", > "++ stat -c %y /var/lib/config-data/swift_ringbuilder.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 08:55:53.009915376 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/swift_ringbuilder", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/swift_ringbuilder", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/swift_ringbuilder.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/swift_ringbuilder --mtime=1970-01-01", > "+ md5sum", > "+ awk '{print $1}'", > "tar: Removing leading `/' from member names", > "+ tar -c -f - /var/lib/config-data/puppet-generated/swift_ringbuilder --mtime=1970-01-01", > "2018-06-22 08:56:06,815 INFO: 9546 -- Removing container: docker-puppet-swift_ringbuilder", > "2018-06-22 08:56:06,876 DEBUG: 9546 -- docker-puppet-swift_ringbuilder", > "2018-06-22 08:56:06,876 INFO: 9546 -- Finished processing puppet configs for swift_ringbuilder", > "2018-06-22 08:56:06,877 INFO: 9546 -- Starting configuration of sahara using image 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4", > "2018-06-22 08:56:06,877 DEBUG: 9546 -- config_volume sahara", > "2018-06-22 08:56:06,877 DEBUG: 9546 -- puppet_tags file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template", > "2018-06-22 08:56:06,878 DEBUG: 9546 -- manifest include ::tripleo::profile::base::sahara::api", > "include ::tripleo::profile::base::sahara::engine", > 
"2018-06-22 08:56:06,878 DEBUG: 9546 -- config_image 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4", > "2018-06-22 08:56:06,878 DEBUG: 9546 -- volumes []", > "2018-06-22 08:56:06,878 INFO: 9546 -- Removing container: docker-puppet-sahara", > "2018-06-22 08:56:06,954 INFO: 9546 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4", > "2018-06-22 08:56:09,482 DEBUG: 9546 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-sahara-api ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-sahara-api", > "e0f71f706c2a: Already exists", > "121ab4741000: Already exists", > "a8ff0031dfcb: Already exists", > "c66228eb2ac7: Already exists", > "6c5f7e9a0fe8: Pulling fs layer", > "5f67eb984180: Pulling fs layer", > "5f67eb984180: Verifying Checksum", > "5f67eb984180: Download complete", > "6c5f7e9a0fe8: Verifying Checksum", > "6c5f7e9a0fe8: Download complete", > "6c5f7e9a0fe8: Pull complete", > "5f67eb984180: Pull complete", > "Digest: sha256:702a41a4d211978832441c041a232227b3d2484d71ef01a8bf7d5332091587a5", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4", > "2018-06-22 08:56:09,485 DEBUG: 9546 -- NET_HOST enabled", > "2018-06-22 08:56:09,485 DEBUG: 9546 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-sahara --env PUPPET_TAGS=file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template --env NAME=sahara --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpKDcGB6:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume 
/dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4", > "2018-06-22 08:56:11,400 DEBUG: 9547 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.65 seconds", > "Notice: /Stage[main]/Apache::Mod::Mime/File[mime.conf]/ensure: defined content as '{md5}9da85e58f3bd6c780ce76db603b7f028'", > "Notice: /Stage[main]/Apache::Mod::Mime_magic/File[mime_magic.conf]/ensure: defined content as '{md5}b258529b332429e2ff8344f726a95457'", > "Notice: /Stage[main]/Apache::Mod::Alias/File[alias.conf]/ensure: defined content as '{md5}983e865be85f5e0daaed7433db82995e'", > "Notice: /Stage[main]/Apache::Mod::Autoindex/File[autoindex.conf]/ensure: defined content as '{md5}2421a3c6df32c7e38c2a7a22afdf5728'", > "Notice: /Stage[main]/Apache::Mod::Deflate/File[deflate.conf]/ensure: defined content as '{md5}a045d750d819b1e9dae3fbfb3f20edd5'", > "Notice: /Stage[main]/Apache::Mod::Dir/File[dir.conf]/ensure: defined content as '{md5}c741d8ea840e6eb999d739eed47c69d7'", > "Notice: /Stage[main]/Apache::Mod::Negotiation/File[negotiation.conf]/ensure: defined content as '{md5}47284b5580b986a6ba32580b6ffb9fd7'", > "Notice: /Stage[main]/Apache::Mod::Setenvif/File[setenvif.conf]/ensure: defined content as '{md5}c7ede4173da1915b7ec088201f030c28'", > "Notice: /Stage[main]/Apache::Mod::Prefork/File[/etc/httpd/conf.modules.d/prefork.conf]/ensure: defined content as 
'{md5}f58b0483b70b4e73b5f67ff37b8f24a0'", > "Notice: /Stage[main]/Apache::Mod::Status/File[status.conf]/ensure: defined content as '{md5}fa95c477a2085c1f7f17ee5f8eccfb90'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully", > "Notice: /Stage[main]/Gnocchi::Db/Gnocchi_config[indexer/url]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Api/Gnocchi_config[api/max_limit]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Api/Gnocchi_config[api/auth_mode]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage/Gnocchi_config[storage/coordination_url]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Incoming::Redis/Gnocchi_config[incoming/driver]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Incoming::Redis/Gnocchi_config[incoming/redis_url]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Ceph/Gnocchi_config[storage/driver]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Ceph/Gnocchi_config[storage/ceph_username]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Ceph/Gnocchi_config[storage/ceph_keyring]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Ceph/Gnocchi_config[storage/ceph_pool]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Ceph/Gnocchi_config[storage/ceph_conffile]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Metricd/Gnocchi_config[metricd/workers]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Metricd/Gnocchi_config[metricd/metric_processing_delay]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Statsd/Gnocchi_config[statsd/resource_id]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Statsd/Gnocchi_config[statsd/archive_policy_name]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Statsd/Gnocchi_config[statsd/flush_delay]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Logging/Oslo::Log[gnocchi_config]/Gnocchi_config[DEFAULT/debug]/ensure: created", > "Notice: 
/Stage[main]/Gnocchi::Logging/Oslo::Log[gnocchi_config]/Gnocchi_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Policy/Oslo::Policy[gnocchi_config]/Gnocchi_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Api/Oslo::Middleware[gnocchi_config]/Gnocchi_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as 
'{md5}fb601ab83ee93876f97d92e5eb37492d'", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf/httpd.conf]/content: content changed '{md5}c6d1bc1fdbcb93bbd2596e4703f4108c' to '{md5}ac42062d69afa9d2671492ce0be87b7b'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[log_config]/File[log_config.load]/ensure: defined content as '{md5}785d35cb285e190d589163b45263ca89'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[systemd]/File[systemd.load]/ensure: defined content as '{md5}26e5d44aae258b3e9d821cbbbd3e2826'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[unixd]/File[unixd.load]/ensure: defined content as '{md5}0e8468ecc1265f8947b8725f4d1be9c0'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_host]/File[authz_host.load]/ensure: defined content as '{md5}d1045f54d2798499ca0f030ca0eef920'", > "Notice: /Stage[main]/Apache::Mod::Actions/Apache::Mod[actions]/File[actions.load]/ensure: defined content as '{md5}599866dfaf734f60f7e2d41ee8235515'", > "Notice: /Stage[main]/Apache::Mod::Authn_core/Apache::Mod[authn_core]/File[authn_core.load]/ensure: defined content as '{md5}704d6e8b02b0eca0eba4083960d16c52'", > "Notice: /Stage[main]/Apache::Mod::Cache/Apache::Mod[cache]/File[cache.load]/ensure: defined content as '{md5}01e4d392225b518a65b0f7d6c4e21d29'", > "Notice: /Stage[main]/Apache::Mod::Ext_filter/Apache::Mod[ext_filter]/File[ext_filter.load]/ensure: defined content as '{md5}76d5e0ac3411a4be57ac33ebe2e52ac8'", > "Notice: /Stage[main]/Apache::Mod::Mime/Apache::Mod[mime]/File[mime.load]/ensure: defined content as '{md5}e36257b9efab01459141d423cae57c7c'", > "Notice: /Stage[main]/Apache::Mod::Mime_magic/Apache::Mod[mime_magic]/File[mime_magic.load]/ensure: defined content as '{md5}cb8670bb2fb352aac7ebf3a85d52094c'", > "Notice: /Stage[main]/Apache::Mod::Rewrite/Apache::Mod[rewrite]/File[rewrite.load]/ensure: defined content as '{md5}26e2683352fc1599f29573ff0d934e79'", > "Notice: 
/Stage[main]/Apache::Mod::Speling/Apache::Mod[speling]/File[speling.load]/ensure: defined content as '{md5}f82e9e6b871a276c324c9eeffcec8a61'", > "Notice: /Stage[main]/Apache::Mod::Suexec/Apache::Mod[suexec]/File[suexec.load]/ensure: defined content as '{md5}c7d5c61c534ba423a79b0ae78ff9be35'", > "Notice: /Stage[main]/Apache::Mod::Version/Apache::Mod[version]/File[version.load]/ensure: defined content as '{md5}1c9243de22ace4dc8266442c48ae0c92'", > "Notice: /Stage[main]/Apache::Mod::Vhost_alias/Apache::Mod[vhost_alias]/File[vhost_alias.load]/ensure: defined content as '{md5}eca907865997d50d5130497665c3f82e'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[auth_digest]/File[auth_digest.load]/ensure: defined content as '{md5}df9e85f8da0b239fe8e698ae7ead4f60'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authn_anon]/File[authn_anon.load]/ensure: defined content as '{md5}bf57b94b5aec35476fc2a2dc3861f132'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authn_dbm]/File[authn_dbm.load]/ensure: defined content as '{md5}90ee8f8ef1a017cacadfda4225e10651'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_dbm]/File[authz_dbm.load]/ensure: defined content as '{md5}c1363277984d22f99b70f7dce8753b60'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_owner]/File[authz_owner.load]/ensure: defined content as '{md5}f30a9be1016df87f195449d9e02d1857'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[expires]/File[expires.load]/ensure: defined content as '{md5}f0825bad1e470de86ffabeb86dcc5d95'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[include]/File[include.load]/ensure: defined content as '{md5}88095a914eedc3c2c184dd5d74c3954c'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[logio]/File[logio.load]/ensure: defined content as '{md5}084533c7a44e9129d0e6df952e2472b6'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[substitute]/File[substitute.load]/ensure: defined content as 
'{md5}8077c34a71afcf41c8fc644830935915'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[usertrack]/File[usertrack.load]/ensure: defined content as '{md5}e95fbbf030fabec98b948f8dc217775c'", > "Notice: /Stage[main]/Apache::Mod::Alias/Apache::Mod[alias]/File[alias.load]/ensure: defined content as '{md5}3cf2fa309ccae4c29a4b875d0894cd79'", > "Notice: /Stage[main]/Apache::Mod::Authn_file/Apache::Mod[authn_file]/File[authn_file.load]/ensure: defined content as '{md5}d41656680003d7b890267bb73621c60b'", > "Notice: /Stage[main]/Apache::Mod::Autoindex/Apache::Mod[autoindex]/File[autoindex.load]/ensure: defined content as '{md5}515cdf5b573e961a60d2931d39248648'", > "Notice: /Stage[main]/Apache::Mod::Dav/Apache::Mod[dav]/File[dav.load]/ensure: defined content as '{md5}588e496251838c4840c14b28b5aa7881'", > "Notice: /Stage[main]/Apache::Mod::Dav_fs/File[dav_fs.conf]/ensure: defined content as '{md5}899a57534f3d84efa81887ec93c90c9b'", > "Notice: /Stage[main]/Apache::Mod::Dav_fs/Apache::Mod[dav_fs]/File[dav_fs.load]/ensure: defined content as '{md5}2996277c73b1cd684a9a3111c355e0d3'", > "Notice: /Stage[main]/Apache::Mod::Deflate/Apache::Mod[deflate]/File[deflate.load]/ensure: defined content as '{md5}2d1a1afcae0c70557251829a8586eeaf'", > "Notice: /Stage[main]/Apache::Mod::Dir/Apache::Mod[dir]/File[dir.load]/ensure: defined content as '{md5}1bfb1c2a46d7351fc9eb47c659dee068'", > "Notice: /Stage[main]/Apache::Mod::Negotiation/Apache::Mod[negotiation]/File[negotiation.load]/ensure: defined content as '{md5}d262ee6a5f20d9dd7f87770638dc2ccd'", > "Notice: /Stage[main]/Apache::Mod::Setenvif/Apache::Mod[setenvif]/File[setenvif.load]/ensure: defined content as '{md5}ec6c99f7cc8e35bdbcf8028f652c9f6d'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[auth_basic]/File[auth_basic.load]/ensure: defined content as '{md5}494bcf4b843f7908675d663d8dc1bdc8'", > "Notice: /Stage[main]/Apache::Mod::Filter/Apache::Mod[filter]/File[filter.load]/ensure: defined content as 
'{md5}66a1e2064a140c3e7dca7ac33877700e'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_core]/File[authz_core.load]/ensure: defined content as '{md5}39942569bff2abdb259f9a347c7246bc'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[access_compat]/File[access_compat.load]/ensure: defined content as '{md5}d5feb88bec4570e2dbc41cce7e0de003'", > "Notice: /Stage[main]/Apache::Mod::Authz_user/Apache::Mod[authz_user]/File[authz_user.load]/ensure: defined content as '{md5}63594303ee808423679b1ea13dd5a784'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_groupfile]/File[authz_groupfile.load]/ensure: defined content as '{md5}ae005a36b3ac8c20af36c434561c8a75'", > "Notice: /Stage[main]/Apache::Mod::Env/Apache::Mod[env]/File[env.load]/ensure: defined content as '{md5}d74184d40d0ee24ba02626a188ee7e1a'", > "Notice: /Stage[main]/Apache::Mod::Prefork/Apache::Mpm[prefork]/File[/etc/httpd/conf.modules.d/prefork.load]/ensure: defined content as '{md5}157529aafcf03fa491bc924103e4608e'", > "Notice: /Stage[main]/Apache::Mod::Cgi/Apache::Mod[cgi]/File[cgi.load]/ensure: defined content as '{md5}ac20c5c5779b37ab06b480d6485a0881'", > "Notice: /Stage[main]/Apache::Mod::Status/Apache::Mod[status]/File[status.load]/ensure: defined content as '{md5}c7726ef20347ef9a06ef68eeaad79765'", > "Notice: /Stage[main]/Apache::Mod::Ssl/Apache::Mod[ssl]/File[ssl.load]/ensure: defined content as '{md5}e282ac9f82fe5538692a4de3616fb695'", > "Notice: /Stage[main]/Apache::Mod::Socache_shmcb/Apache::Mod[socache_shmcb]/File[socache_shmcb.load]/ensure: defined content as '{md5}ab31a6ea611785f74851b578572e4157'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Apache/Systemd::Dropin_file[httpd.conf]/File[/etc/systemd/system/httpd.service.d]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Apache/Systemd::Dropin_file[httpd.conf]/File[/etc/systemd/system/httpd.service.d/httpd.conf]/ensure: defined content as '{md5}c44e90292b030f86c3b82096b68fe9cc'", > "Notice: 
/Stage[main]/Apache/File[/etc/httpd/conf.d/README]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/autoindex.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/userdir.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/welcome.conf]/ensure: removed", > "Notice: /Stage[main]/Apache::Mod::Ssl/File[ssl.conf]/content: content changed '{md5}9e163ce201541f8aa36fcc1a372ed34d' to '{md5}b6f6f2773db25c777f1db887e7a3f57d'", > "Notice: /Stage[main]/Apache::Mod::Wsgi/File[wsgi.conf]/ensure: defined content as '{md5}8b3feb3fc2563de439920bb2c52cbd11'", > "Notice: /Stage[main]/Apache::Mod::Wsgi/Apache::Mod[wsgi]/File[wsgi.load]/ensure: defined content as '{md5}e1795e051e7aae1f865fde0d3b86a507'", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-base.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-dav.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-lua.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-mpm.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-proxy.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-ssl.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-systemd.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/01-cgi.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/10-wsgi.conf]/ensure: removed", > "Notice: /Stage[main]/Gnocchi::Wsgi::Apache/Openstacklib::Wsgi::Apache[gnocchi_wsgi]/File[/var/www/cgi-bin/gnocchi]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Wsgi::Apache/Openstacklib::Wsgi::Apache[gnocchi_wsgi]/File[gnocchi_wsgi]/ensure: defined content as '{md5}c03530dd30d25ec70b705e0c2f43df7a'", > "Notice: 
/Stage[main]/Gnocchi::Wsgi::Apache/Openstacklib::Wsgi::Apache[gnocchi_wsgi]/Apache::Vhost[gnocchi_wsgi]/Concat[10-gnocchi_wsgi.conf]/File[/etc/httpd/conf.d/10-gnocchi_wsgi.conf]/ensure: defined content as '{md5}de49d0be1de90b5720500984b038f6fb'", > "Notice: Applied catalog in 1.23 seconds", > " Total: 110", > " Success: 110", > " Changed: 110", > " Out of sync: 110", > " Total: 253", > " Skipped: 42", > " Resources: 0.00", > " Concat file: 0.00", > " Anchor: 0.00", > " Concat fragment: 0.00", > " Augeas: 0.02", > " Gnocchi config: 0.26", > " File: 0.36", > " Last run: 1529657769", > " Config retrieval: 4.21", > " Total: 4.85", > " Config: 1529657764", > "Gathering files modified after 2018-06-22 08:55:59.458826309 +0000", > "2018-06-22 08:56:11,400 DEBUG: 9547 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config'", > "+ origin_of_time=/var/lib/config-data/gnocchi.origin_of_time", > "+ touch /var/lib/config-data/gnocchi.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config /etc/config.pp", > "Warning: ModuleLoader: module 'gnocchi' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/gnocchi/manifests/db.pp\", 26]:[\"/etc/puppet/modules/gnocchi/manifests/init.pp\", 54]", > "Warning: ModuleLoader: module 'mysql' has unresolved dependencies - it will only see those that are resolved. 
Use 'puppet module list --tree' to see information about modules", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/gnocchi/manifests/config.pp\", 29]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/gnocchi.pp\", 31]", > "Warning: Scope(Class[Gnocchi::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > "Warning: ModuleLoader: module 'oslo' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'keystone' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'openstacklib' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/gnocchi", > "++ stat -c %y /var/lib/config-data/gnocchi.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 08:55:59.458826309 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/gnocchi", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/gnocchi", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/gnocchi.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/gnocchi --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/gnocchi --mtime=1970-01-01", > "2018-06-22 08:56:11,400 INFO: 9547 -- Removing container: docker-puppet-gnocchi", > "2018-06-22 08:56:11,452 DEBUG: 9547 -- docker-puppet-gnocchi", > "2018-06-22 08:56:11,452 INFO: 9547 -- Finished processing puppet configs for gnocchi", > "2018-06-22 
08:56:11,452 INFO: 9547 -- Starting configuration of clustercheck using image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", > "2018-06-22 08:56:11,452 DEBUG: 9547 -- config_volume clustercheck", > "2018-06-22 08:56:11,452 DEBUG: 9547 -- puppet_tags file,file_line,concat,augeas,cron,file", > "2018-06-22 08:56:11,452 DEBUG: 9547 -- manifest include ::tripleo::profile::pacemaker::clustercheck", > "2018-06-22 08:56:11,452 DEBUG: 9547 -- config_image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", > "2018-06-22 08:56:11,452 DEBUG: 9547 -- volumes []", > "2018-06-22 08:56:11,453 INFO: 9547 -- Removing container: docker-puppet-clustercheck", > "2018-06-22 08:56:11,514 INFO: 9547 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", > "2018-06-22 08:56:16,942 DEBUG: 9545 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.49 seconds", > "Notice: /Stage[main]/Nova::Db/Nova_config[api_database/connection]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Nova_config[placement_database/connection]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[glance/api_servers]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/my_ip]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[api/auth_strategy]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/image_service]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ram_allocation_ratio]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[cinder/catalog_info]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[os_vif_linux_bridge/use_ipv6]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_api_faults]/ensure: created", > "Notice: 
/Stage[main]/Nova/Nova_config[notifications/notification_format]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/state_path]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/service_down_time]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/rootwrap_config]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/report_interval]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_state_change]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_type]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_url]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/password]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/username]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/region_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/os_interface]/ensure: created", > "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/backend]/ensure: created", > "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/enabled]/ensure: created", > "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/memcache_servers]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/db_max_retries]/ensure: 
created", > "Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Default[nova_config]/Nova_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Concurrency[nova_config]/Nova_config[oslo_concurrency/lock_path]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/memcached_servers]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/username]/ensure: created", > "Notice: 
/Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}49be14cfb068b41bfaaef9b9fdd0af72'", > "Notice: /Stage[main]/Nova::Wsgi::Apache_placement/File[/etc/httpd/conf.d/00-nova-placement-api.conf]/content: content changed '{md5}611e31d39e1635bfabc0aafc51b43d0b' to '{md5}612d455490cfecc4b51db6656ea39240'", > "Notice: /Stage[main]/Nova::Wsgi::Apache_placement/Openstacklib::Wsgi::Apache[placement_wsgi]/File[/var/www/cgi-bin/nova]/ensure: created", > "Notice: /Stage[main]/Nova::Wsgi::Apache_placement/Openstacklib::Wsgi::Apache[placement_wsgi]/File[placement_wsgi]/ensure: defined content as '{md5}2c992c50344eb1765282cb9fb70126db'", > "Notice: /Stage[main]/Nova::Wsgi::Apache_placement/Openstacklib::Wsgi::Apache[placement_wsgi]/Apache::Vhost[placement_wsgi]/Concat[10-placement_wsgi.conf]/File[/etc/httpd/conf.d/10-placement_wsgi.conf]/ensure: defined content as '{md5}80c8f482fa28d5595edb0139d3dcd178'", > "Notice: Applied catalog in 8.24 seconds", > " Total: 132", > " Success: 132", > " Changed: 132", > " Out of sync: 132", > " Total: 371", > " Skipped: 39", > " Package: 0.10", > " File: 0.51", > " Total: 12.78", > " Last run: 1529657775", > " Config retrieval: 5.15", > " Nova config: 7.00", > " Config: 1529657761", > "Gathering files modified after 2018-06-22 08:55:56.656864735 +0000", > 
"2018-06-22 08:56:16,942 DEBUG: 9545 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,nova_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,nova_config'", > "+ origin_of_time=/var/lib/config-data/nova_placement.origin_of_time", > "+ touch /var/lib/config-data/nova_placement.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,nova_config /etc/config.pp", > "ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ipv6 instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 105]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/placement.pp\", 62]", > "Warning: ModuleLoader: module 'nova' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/config.pp\", 37]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 114]", > "Warning: Scope(Class[Nova::Db]): placement_database_connection has no effect as of pike, and may be removed in a future release", > "Warning: Scope(Class[Nova::Db]): placement_slave_connection has no effect as of pike, and may be removed in a future release", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/db.pp\", 126]:[\"/etc/puppet/modules/nova/manifests/init.pp\", 530]", > " with Stdlib::Compat::Array. 
There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/init.pp\", 533]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/placement.pp\", 62]", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/placement.pp\", 101]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 138]", > "Warning: Scope(Class[Nova::Placement]): The os_region_name parameter is deprecated and will be removed \\", > "in a future release. Please use region_name instead.", > "Warning: Scope(Class[Nova::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/nova_placement", > "++ stat -c %y /var/lib/config-data/nova_placement.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 08:55:56.656864735 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/nova_placement", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/nova_placement", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/nova_placement.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/nova_placement --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/nova_placement --mtime=1970-01-01", > "2018-06-22 08:56:16,942 INFO: 9545 -- Removing container: docker-puppet-nova_placement", > "2018-06-22 08:56:16,994 DEBUG: 9545 -- docker-puppet-nova_placement", > "2018-06-22 08:56:16,994 INFO: 9545 -- Finished processing puppet configs for nova_placement", > "2018-06-22 08:56:16,995 INFO: 9545 -- Starting configuration of aodh using image 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", > "2018-06-22 08:56:16,995 DEBUG: 9545 -- 
config_volume aodh", > "2018-06-22 08:56:16,995 DEBUG: 9545 -- puppet_tags file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config", > "2018-06-22 08:56:16,995 DEBUG: 9545 -- manifest include tripleo::profile::base::aodh::api", > "include tripleo::profile::base::aodh::evaluator", > "include tripleo::profile::base::aodh::listener", > "include tripleo::profile::base::aodh::notifier", > "2018-06-22 08:56:16,995 DEBUG: 9545 -- config_image 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", > "2018-06-22 08:56:16,995 DEBUG: 9545 -- volumes []", > "2018-06-22 08:56:16,996 INFO: 9545 -- Removing container: docker-puppet-aodh", > "2018-06-22 08:56:17,086 INFO: 9545 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", > "2018-06-22 08:56:18,110 DEBUG: 9547 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-mariadb ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-mariadb", > "2ee1f6a99b58: Pulling fs layer", > "2ee1f6a99b58: Verifying Checksum", > "2ee1f6a99b58: Download complete", > "2ee1f6a99b58: Pull complete", > "Digest: sha256:2a886d2154594b405341b26bdc272a2796459d288a4fde8b2ee6f5ca253f6792", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", > "2018-06-22 08:56:18,113 DEBUG: 9547 -- NET_HOST enabled", > "2018-06-22 08:56:18,114 DEBUG: 9547 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-clustercheck --env PUPPET_TAGS=file,file_line,concat,augeas,cron,file --env NAME=clustercheck --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpTF6IOM:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume 
/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", > "2018-06-22 08:56:19,340 DEBUG: 9545 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-aodh-api ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-aodh-api", > "cb7d08d4cc0c: Pulling fs layer", > "6e57c8911d7b: Pulling fs layer", > "6e57c8911d7b: Verifying Checksum", > "6e57c8911d7b: Download complete", > "cb7d08d4cc0c: Verifying Checksum", > "cb7d08d4cc0c: Download complete", > "cb7d08d4cc0c: Pull complete", > "6e57c8911d7b: Pull complete", > "Digest: sha256:fa189b1bb39e6c29a0fe5a6e824ae0f89206ba6749e373e719edac2129e0ff6b", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", > "2018-06-22 08:56:19,343 DEBUG: 9545 -- NET_HOST enabled", > "2018-06-22 08:56:19,344 DEBUG: 9545 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-aodh --env PUPPET_TAGS=file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config --env NAME=aodh --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpK4gSAp:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume 
/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", > "2018-06-22 08:56:20,104 DEBUG: 9546 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.12 seconds", > "Notice: /Stage[main]/Sahara/Sahara_config[DEFAULT/plugins]/ensure: created", > "Notice: /Stage[main]/Sahara/Sahara_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Sahara/Sahara_config[DEFAULT/port]/ensure: created", > "Notice: /Stage[main]/Sahara::Service::Api/Sahara_config[DEFAULT/api_workers]/ensure: created", > "Notice: /Stage[main]/Sahara::Logging/Oslo::Log[sahara_config]/Sahara_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Sahara::Logging/Oslo::Log[sahara_config]/Sahara_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Sahara::Db/Oslo::Db[sahara_config]/Sahara_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Sahara::Db/Oslo::Db[sahara_config]/Sahara_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Sahara::Db/Oslo::Db[sahara_config]/Sahara_config[database/db_max_retries]/ensure: created", > "Notice: /Stage[main]/Sahara::Policy/Oslo::Policy[sahara_config]/Sahara_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: 
/Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Sahara/Oslo::Messaging::Default[sahara_config]/Sahara_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Sahara/Oslo::Messaging::Rabbit[sahara_config]/Sahara_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Sahara/Oslo::Messaging::Zmq[sahara_config]/Sahara_config[DEFAULT/rpc_zmq_host]/ensure: created", > "Notice: /Stage[main]/Sahara::Notify/Oslo::Messaging::Notifications[sahara_config]/Sahara_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Sahara::Notify/Oslo::Messaging::Notifications[sahara_config]/Sahara_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: Applied catalog in 
1.45 seconds", > " Total: 25", > " Success: 25", > " Total: 196", > " Skipped: 23", > " Out of sync: 25", > " Changed: 25", > " Augeas: 0.03", > " Package: 0.06", > " Sahara config: 1.07", > " Last run: 1529657779", > " Config retrieval: 2.42", > " Total: 3.57", > " Config: 1529657775", > "Gathering files modified after 2018-06-22 08:56:09.676689688 +0000", > "2018-06-22 08:56:20,104 DEBUG: 9546 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template'", > "+ origin_of_time=/var/lib/config-data/sahara.origin_of_time", > "+ touch /var/lib/config-data/sahara.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template /etc/config.pp", > "Warning: ModuleLoader: module 'sahara' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/sahara/manifests/db.pp\", 69]:[\"/etc/puppet/modules/sahara/manifests/init.pp\", 380]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/sahara/manifests/policy.pp\", 34]:[\"/etc/puppet/modules/sahara/manifests/init.pp\", 381]", > "Warning: Scope(Class[Sahara]): The use_neutron parameter has been deprecated and will be removed in the future release.", > "Warning: Scope(Class[Sahara]): sahara::admin_user, sahara::admin_password, sahara::auth_uri, sahara::identity_uri, sahara::admin_tenant_name and sahara::memcached_servers are deprecated. Please use sahara::keystone::authtoken::* parameters instead.", > "Warning: Scope(Class[Sahara::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/sahara", > "++ stat -c %y /var/lib/config-data/sahara.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 08:56:09.676689688 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/sahara", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/sahara", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/sahara.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/sahara --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/sahara --mtime=1970-01-01", > "2018-06-22 08:56:20,104 INFO: 9546 -- Removing container: docker-puppet-sahara", > "2018-06-22 08:56:20,142 DEBUG: 9546 -- docker-puppet-sahara", > "2018-06-22 08:56:20,143 INFO: 9546 -- Finished processing puppet configs for sahara", > "2018-06-22 08:56:20,143 INFO: 9546 -- Starting configuration of mysql using image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", > "2018-06-22 08:56:20,143 DEBUG: 9546 -- config_volume mysql", > "2018-06-22 08:56:20,143 DEBUG: 9546 -- puppet_tags file,file_line,concat,augeas,cron,file", > "2018-06-22 08:56:20,144 DEBUG: 9546 -- manifest ['Mysql_datadir', 'Mysql_user', 'Mysql_database', 
'Mysql_grant', 'Mysql_plugin'].each |String $val| { noop_resource($val) }", > "2018-06-22 08:56:20,144 DEBUG: 9546 -- config_image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", > "2018-06-22 08:56:20,144 DEBUG: 9546 -- volumes []", > "2018-06-22 08:56:20,144 INFO: 9546 -- Removing container: docker-puppet-mysql", > "2018-06-22 08:56:20,194 INFO: 9546 -- Image already exists: 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", > "2018-06-22 08:56:20,197 DEBUG: 9546 -- NET_HOST enabled", > "2018-06-22 08:56:20,197 DEBUG: 9546 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-mysql --env PUPPET_TAGS=file,file_line,concat,augeas,cron,file --env NAME=mysql --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmp3y4MEV:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", > "2018-06-22 08:56:24,634 DEBUG: 9547 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 0.47 seconds", > "Notice: 
/Stage[main]/Tripleo::Profile::Pacemaker::Clustercheck/File[/etc/sysconfig/clustercheck]/ensure: defined content as '{md5}5b8acaa58a90d174e15437cd06a5f6f1'", > "Notice: /Stage[main]/Xinetd/File[/etc/xinetd.conf]/content: content changed '{md5}9ff8cc688dd9f0dfc45e5afd25c427a7' to '{md5}7d37008224e71625019cb48768f267e7'", > "Notice: /Stage[main]/Xinetd/File[/etc/xinetd.conf]/mode: mode changed '0600' to '0644'", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Clustercheck/Xinetd::Service[galera-monitor]/File[/etc/xinetd.d/galera-monitor]/ensure: defined content as '{md5}cbe84e58d4f0ebc4672f8ee38f084ba3'", > "Notice: Applied catalog in 0.05 seconds", > " Total: 4", > " Success: 4", > " Total: 13", > " Out of sync: 3", > " Changed: 3", > " Skipped: 9", > " File: 0.03", > " Config retrieval: 0.62", > " Total: 0.65", > " Last run: 1529657783", > " Config: 1529657783", > "Gathering files modified after 2018-06-22 08:56:18.314578366 +0000", > "2018-06-22 08:56:24,634 DEBUG: 9547 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,file ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,file'", > "+ origin_of_time=/var/lib/config-data/clustercheck.origin_of_time", > "+ touch /var/lib/config-data/clustercheck.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,file /etc/config.pp", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/clustercheck", > "++ stat -c %y /var/lib/config-data/clustercheck.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 08:56:18.314578366 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/clustercheck", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/clustercheck", > "++ find /etc /root /opt 
/var/spool/cron -newer /var/lib/config-data/clustercheck.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/clustercheck --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/clustercheck --mtime=1970-01-01", > "2018-06-22 08:56:24,635 INFO: 9547 -- Removing container: docker-puppet-clustercheck", > "2018-06-22 08:56:24,682 DEBUG: 9547 -- docker-puppet-clustercheck", > "2018-06-22 08:56:24,682 INFO: 9547 -- Finished processing puppet configs for clustercheck", > "2018-06-22 08:56:24,683 INFO: 9547 -- Starting configuration of redis using image 192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", > "2018-06-22 08:56:24,683 DEBUG: 9547 -- config_volume redis", > "2018-06-22 08:56:24,683 DEBUG: 9547 -- puppet_tags file,file_line,concat,augeas,cron,exec", > "2018-06-22 08:56:24,683 DEBUG: 9547 -- manifest include ::tripleo::profile::pacemaker::database::redis_bundle", > "2018-06-22 08:56:24,683 DEBUG: 9547 -- config_image 192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", > "2018-06-22 08:56:24,683 DEBUG: 9547 -- volumes []", > "2018-06-22 08:56:24,683 INFO: 9547 -- Removing container: docker-puppet-redis", > "2018-06-22 08:56:24,756 INFO: 9547 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", > "2018-06-22 08:56:28,319 DEBUG: 9547 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-redis ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-redis", > "13055d264df1: Pulling fs layer", > "dfc35b833f61: Pulling fs layer", > "13055d264df1: Verifying Checksum", > "13055d264df1: Download complete", > "13055d264df1: Pull complete", > "dfc35b833f61: Verifying Checksum", > "dfc35b833f61: Download complete", > "dfc35b833f61: Pull complete", > "Digest: sha256:7782f917270ad46f451fe06063a6adb53afe9d81474a7af374ed7b9c09d1b055", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", > "2018-06-22 08:56:28,322 DEBUG: 9547 -- NET_HOST enabled", > "2018-06-22 08:56:28,322 DEBUG: 9547 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-redis --env PUPPET_TAGS=file,file_line,concat,augeas,cron,exec --env NAME=redis --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpf5cSM_:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", > "2018-06-22 08:56:31,230 DEBUG: 9546 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.44 seconds", > 
"Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/ensure: defined content as '{md5}e51811cf726fa3e6a5a924a379dc5198'", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/ensure: defined content as '{md5}5a169246460baf3e552027b0f5e8a1f8'", > "Notice: /Stage[main]/Mysql::Server::Config/File[mysql-config-file]/content: content changed '{md5}af90358207ccfecae7af249d5ef7dd3e' to '{md5}6974e9520cec98c6b2b0f624665dbd32'", > "Notice: /Stage[main]/Mysql::Server::Installdb/File[/var/log/mariadb/mariadb.log]/ensure: created", > "Notice: Applied catalog in 0.27 seconds", > " Skipped: 225", > " Total: 230", > " Out of sync: 4", > " Changed: 4", > " File: 0.02", > " Last run: 1529657790", > " Config retrieval: 4.82", > " Total: 4.84", > " Config: 1529657785", > "Gathering files modified after 2018-06-22 08:56:20.381552280 +0000", > "2018-06-22 08:56:31,230 DEBUG: 9546 -- + mkdir -p /etc/puppet", > "+ origin_of_time=/var/lib/config-data/mysql.origin_of_time", > "+ touch /var/lib/config-data/mysql.origin_of_time", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Array instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/profile/pacemaker/database/mysql_bundle.pp\", 133]:[\"/etc/config.pp\", 4]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/database/mysql.pp\", 103]:[\"/etc/config.pp\", 4]", > "Warning: ModuleLoader: module 'aodh' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/aodh/manifests/db/mysql.pp\", 58]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/database/mysql.pp\", 175]", > "Warning: ModuleLoader: module 'cinder' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'glance' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'heat' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'neutron' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'panko' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Pattern[]. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/openstacklib/manifests/db/mysql/host_access.pp\", 43]:", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/mysql", > "++ stat -c %y /var/lib/config-data/mysql.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 08:56:20.381552280 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/mysql", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/mysql", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/mysql.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/mysql --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/mysql --mtime=1970-01-01", > "2018-06-22 08:56:31,230 INFO: 9546 -- Removing container: docker-puppet-mysql", > "2018-06-22 08:56:31,271 DEBUG: 9546 -- docker-puppet-mysql", > "2018-06-22 08:56:31,271 INFO: 9546 -- Finished processing puppet configs for mysql", > "2018-06-22 08:56:31,272 INFO: 9546 -- Starting configuration of nova using image 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", > "2018-06-22 08:56:31,272 DEBUG: 9546 -- config_volume nova", > "2018-06-22 08:56:31,272 DEBUG: 9546 -- puppet_tags file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config", > "2018-06-22 08:56:31,272 DEBUG: 9546 -- manifest ['Nova_cell_v2'].each |String $val| { noop_resource($val) }", > "include tripleo::profile::base::nova::conductor", > "include tripleo::profile::base::nova::consoleauth", > "include tripleo::profile::base::nova::scheduler", > "include tripleo::profile::base::nova::vncproxy", > "2018-06-22 08:56:31,272 DEBUG: 9546 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", > "2018-06-22 08:56:31,272 DEBUG: 9546 -- volumes []", > "2018-06-22 08:56:31,272 INFO: 9546 -- Removing container: docker-puppet-nova", > "2018-06-22 08:56:31,340 INFO: 9546 
-- Pulling image: 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", > "2018-06-22 08:56:32,650 DEBUG: 9546 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-nova-api ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-nova-api", > "0e3031608420: Already exists", > "b32f33ab1345: Pulling fs layer", > "b32f33ab1345: Verifying Checksum", > "b32f33ab1345: Download complete", > "b32f33ab1345: Pull complete", > "Digest: sha256:98f38e1deb6081bcc8d18a914af693593a06823741381f71dacd158824ef18f8", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", > "2018-06-22 08:56:32,653 DEBUG: 9546 -- NET_HOST enabled", > "2018-06-22 08:56:32,654 DEBUG: 9546 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-nova --env PUPPET_TAGS=file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config --env NAME=nova --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmp7319OR:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", > "2018-06-22 08:56:32,893 DEBUG: 9545 -- Notice: hiera(): Cannot load backend module_data: cannot load such 
file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.09 seconds", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/auth_url]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/region_name]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/username]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/password]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/project_name]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/project_domain_id]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/user_domain_id]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/auth_type]/ensure: created", > "Notice: /Stage[main]/Aodh::Api/Aodh_config[api/gnocchi_external_project_owner]/ensure: created", > "Notice: /Stage[main]/Aodh::Api/Aodh_config[api/host]/ensure: created", > "Notice: /Stage[main]/Aodh::Api/Aodh_config[api/port]/ensure: created", > "Notice: /Stage[main]/Aodh::Evaluator/Aodh_config[coordination/backend_url]/ensure: created", > "Notice: /Stage[main]/Aodh::Db/Oslo::Db[aodh_config]/Aodh_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Aodh::Logging/Oslo::Log[aodh_config]/Aodh_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Aodh::Logging/Oslo::Log[aodh_config]/Aodh_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Aodh/Oslo::Messaging::Rabbit[aodh_config]/Aodh_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Aodh/Oslo::Messaging::Default[aodh_config]/Aodh_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Aodh/Oslo::Messaging::Notifications[aodh_config]/Aodh_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: 
/Stage[main]/Aodh/Oslo::Messaging::Notifications[aodh_config]/Aodh_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Aodh::Policy/Oslo::Policy[aodh_config]/Aodh_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Aodh::Api/Oslo::Middleware[aodh_config]/Aodh_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}d5cc693f7e5ab209bd9dd3aaa62d0015'", > "Notice: 
/Stage[main]/Aodh::Wsgi::Apache/Openstacklib::Wsgi::Apache[aodh_wsgi]/File[/var/www/cgi-bin/aodh]/owner: owner changed 'root' to 'aodh'", > "Notice: /Stage[main]/Aodh::Wsgi::Apache/Openstacklib::Wsgi::Apache[aodh_wsgi]/File[/var/www/cgi-bin/aodh]/group: group changed 'root' to 'aodh'", > "Notice: /Stage[main]/Aodh::Wsgi::Apache/Openstacklib::Wsgi::Apache[aodh_wsgi]/File[aodh_wsgi]/ensure: defined content as '{md5}09d823939c45501c11f2096289fe70cf'", > "Notice: /Stage[main]/Aodh::Wsgi::Apache/Openstacklib::Wsgi::Apache[aodh_wsgi]/Apache::Vhost[aodh_wsgi]/Concat[10-aodh_wsgi.conf]/File[/etc/httpd/conf.d/10-aodh_wsgi.conf]/ensure: defined content as '{md5}8af5d50387c01320c5ddb1a078c98e17'", > "Notice: Applied catalog in 2.04 seconds", > " Total: 112", > " Success: 112", > " Changed: 111", > " Out of sync: 111", > " Total: 331", > " Skipped: 40", > " Package: 0.05", > " File: 0.40", > " Aodh config: 0.87", > " Last run: 1529657791", > " Config retrieval: 4.76", > " Total: 6.10", > " Config: 1529657784", > "Gathering files modified after 2018-06-22 08:56:19.588562264 +0000", > "2018-06-22 08:56:32,893 DEBUG: 9545 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config'", > "+ origin_of_time=/var/lib/config-data/aodh.origin_of_time", > "+ touch /var/lib/config-data/aodh.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config /etc/config.pp", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/aodh/manifests/config.pp\", 33]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/aodh.pp\", 123]", > "Warning: Scope(Class[Aodh::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/oslo/manifests/db.pp\", 140]:", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/aodh", > "++ stat -c %y /var/lib/config-data/aodh.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 08:56:19.588562264 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/aodh", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/aodh", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/aodh.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/aodh --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/aodh --mtime=1970-01-01", > "2018-06-22 08:56:32,893 INFO: 9545 -- Removing container: docker-puppet-aodh", > "2018-06-22 08:56:33,060 DEBUG: 9545 -- docker-puppet-aodh", > "2018-06-22 08:56:33,060 INFO: 9545 -- Finished processing puppet configs for aodh", > "2018-06-22 08:56:33,060 INFO: 9545 -- Starting configuration of heat_api using image 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", > "2018-06-22 08:56:33,060 DEBUG: 9545 -- config_volume heat_api", > "2018-06-22 08:56:33,061 DEBUG: 9545 -- puppet_tags file,file_line,concat,augeas,cron,heat_config,file,concat,file_line", > "2018-06-22 08:56:33,061 DEBUG: 9545 -- manifest include ::tripleo::profile::base::heat::api", > "2018-06-22 08:56:33,061 DEBUG: 9545 -- config_image 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", > "2018-06-22 08:56:33,061 DEBUG: 9545 -- volumes []", > "2018-06-22 
08:56:33,061 INFO: 9545 -- Removing container: docker-puppet-heat_api", > "2018-06-22 08:56:33,132 INFO: 9545 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", > "2018-06-22 08:56:35,358 DEBUG: 9545 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-heat-api ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-heat-api", > "15497368e843: Pulling fs layer", > "a91507f6d5dc: Pulling fs layer", > "a91507f6d5dc: Verifying Checksum", > "a91507f6d5dc: Download complete", > "15497368e843: Verifying Checksum", > "15497368e843: Download complete", > "15497368e843: Pull complete", > "a91507f6d5dc: Pull complete", > "Digest: sha256:7e8eb4cb5943296bd67f2e22c40a7519d3c71f8533541c54da0c9f5ef6b361ce", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", > "2018-06-22 08:56:35,361 DEBUG: 9545 -- NET_HOST enabled", > "2018-06-22 08:56:35,362 DEBUG: 9545 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-heat_api --env PUPPET_TAGS=file,file_line,concat,augeas,cron,heat_config,file,concat,file_line --env NAME=heat_api --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpcr4AhI:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host 
--volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", > "2018-06-22 08:56:35,896 DEBUG: 9547 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 1.03 seconds", > "Notice: /Stage[main]/Redis::Config/File[/etc/redis]/ensure: created", > "Notice: /Stage[main]/Redis::Config/File[/var/log/redis]/mode: mode changed '0750' to '0755'", > "Notice: /Stage[main]/Redis::Config/File[/var/lib/redis]/mode: mode changed '0750' to '0755'", > "Notice: /Stage[main]/Redis::Ulimit/File[/etc/security/limits.d/redis.conf]/ensure: defined content as '{md5}a2f723773964f5ea42b6c7c5d6b72208'", > "Notice: /Stage[main]/Redis::Ulimit/File[/etc/systemd/system/redis.service.d/limit.conf]/mode: mode changed '0644' to '0444'", > "Notice: /Stage[main]/Redis::Config/Redis::Instance[default]/File[/etc/redis.conf.puppet]/ensure: defined content as '{md5}a28a4c5619dba8f7ab430570cf6e9ca7'", > "Notice: /Stage[main]/Redis::Config/Redis::Instance[default]/Exec[cp -p /etc/redis.conf.puppet /etc/redis.conf]: Triggered 'refresh' from 1 events", > "Notice: Applied catalog in 0.06 seconds", > " Total: 6", > " Success: 6", > " Restarted: 1", > " Skipped: 11", > " Total: 21", > " Out of sync: 6", > " Changed: 6", > " Exec: 0.00", > " File: 0.01", > " Augeas: 0.01", > " Config retrieval: 1.20", > " Total: 1.23", > " Last run: 1529657795", > " Config: 1529657793", > "Gathering files modified after 2018-06-22 08:56:28.521451571 +0000", > "2018-06-22 08:56:35,896 DEBUG: 9547 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,exec ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,exec'", > "+ origin_of_time=/var/lib/config-data/redis.origin_of_time", > "+ touch /var/lib/config-data/redis.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest 
console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,exec /etc/config.pp", > "Warning: ModuleLoader: module 'redis' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/redis", > "++ stat -c %y /var/lib/config-data/redis.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 08:56:28.521451571 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/redis", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/redis", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/redis.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/redis --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/redis --mtime=1970-01-01", > "2018-06-22 08:56:35,897 INFO: 9547 -- Removing container: docker-puppet-redis", > "2018-06-22 08:56:35,929 DEBUG: 9547 -- docker-puppet-redis", > "2018-06-22 08:56:35,929 INFO: 9547 -- Finished processing puppet configs for redis", > "2018-06-22 08:56:35,930 INFO: 9547 -- Starting configuration of keystone using image 192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", > "2018-06-22 08:56:35,930 DEBUG: 9547 -- config_volume keystone", > "2018-06-22 08:56:35,930 DEBUG: 9547 -- puppet_tags file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config", > "2018-06-22 08:56:35,930 DEBUG: 9547 -- manifest ['Keystone_user', 'Keystone_endpoint', 'Keystone_domain', 'Keystone_tenant', 'Keystone_user_role', 'Keystone_role', 'Keystone_service'].each |String $val| { noop_resource($val) }", > "2018-06-22 08:56:35,930 DEBUG: 9547 -- config_image 192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", > "2018-06-22 08:56:35,930 DEBUG: 9547 -- 
volumes []", > "2018-06-22 08:56:35,931 INFO: 9547 -- Removing container: docker-puppet-keystone", > "2018-06-22 08:56:35,994 INFO: 9547 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", > "2018-06-22 08:56:38,401 DEBUG: 9547 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-keystone ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-keystone", > "6222a19b9ac2: Pulling fs layer", > "900dd421e68b: Pulling fs layer", > "900dd421e68b: Verifying Checksum", > "900dd421e68b: Download complete", > "6222a19b9ac2: Verifying Checksum", > "6222a19b9ac2: Download complete", > "6222a19b9ac2: Pull complete", > "900dd421e68b: Pull complete", > "Digest: sha256:5aaa5a4237af74f89ed31c8ff7e97414693ecfb9ce82bcb13f238c1a96030dc5", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", > "2018-06-22 08:56:38,404 DEBUG: 9547 -- NET_HOST enabled", > "2018-06-22 08:56:38,404 DEBUG: 9547 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-keystone --env PUPPET_TAGS=file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config --env NAME=keystone --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpKtYN2a:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint 
/var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", > "2018-06-22 08:56:48,474 DEBUG: 9545 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.59 seconds", > "Notice: /Stage[main]/Heat::Cron::Purge_deleted/Cron[heat-manage purge_deleted]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Domain/Heat_config[DEFAULT/stack_domain_admin]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Domain/Heat_config[DEFAULT/stack_domain_admin_password]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Domain/Heat_config[DEFAULT/stack_user_domain_name]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[trustee/auth_type]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[trustee/auth_url]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[trustee/username]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[trustee/password]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[trustee/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[trustee/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[clients_keystone/auth_uri]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[DEFAULT/max_json_body_size]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[ec2authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[yaql/limit_iterators]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[yaql/memory_quota]/ensure: created", > "Notice: /Stage[main]/Heat::Api/Heat_config[heat_api/bind_host]/ensure: created", > "Notice: /Stage[main]/Heat::Logging/Oslo::Log[heat_config]/Heat_config[DEFAULT/debug]/ensure: created", > "Notice: 
/Stage[main]/Heat::Logging/Oslo::Log[heat_config]/Heat_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Heat::Db/Oslo::Db[heat_config]/Heat_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Heat::Db/Oslo::Db[heat_config]/Heat_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Heat::Db/Oslo::Db[heat_config]/Heat_config[database/db_max_retries]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Heat/Oslo::Messaging::Rabbit[heat_config]/Heat_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: 
/Stage[main]/Heat/Oslo::Messaging::Rabbit[heat_config]/Heat_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Heat/Oslo::Messaging::Notifications[heat_config]/Heat_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Heat/Oslo::Messaging::Notifications[heat_config]/Heat_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Heat/Oslo::Messaging::Default[heat_config]/Heat_config[DEFAULT/rpc_response_timeout]/ensure: created", > "Notice: /Stage[main]/Heat/Oslo::Messaging::Default[heat_config]/Heat_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Heat/Oslo::Middleware[heat_config]/Heat_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Heat::Cors/Oslo::Cors[heat_config]/Heat_config[cors/expose_headers]/ensure: created", > "Notice: /Stage[main]/Heat::Cors/Oslo::Cors[heat_config]/Heat_config[cors/max_age]/ensure: created", > "Notice: /Stage[main]/Heat::Cors/Oslo::Cors[heat_config]/Heat_config[cors/allow_headers]/ensure: created", > "Notice: /Stage[main]/Heat::Policy/Oslo::Policy[heat_config]/Heat_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}b63941036cde729a9461a36dc1e260ed'", > "Notice: /Stage[main]/Heat::Wsgi::Apache_api/Heat::Wsgi::Apache[api]/Openstacklib::Wsgi::Apache[heat_api_wsgi]/File[/var/www/cgi-bin/heat]/ensure: created", > "Notice: /Stage[main]/Heat::Wsgi::Apache_api/Heat::Wsgi::Apache[api]/Openstacklib::Wsgi::Apache[heat_api_wsgi]/File[heat_api_wsgi]/ensure: defined content as '{md5}640891728ce5d46ae40234228561597c'", > "Notice: /Stage[main]/Heat::Wsgi::Apache_api/Heat::Wsgi::Apache[api]/Openstacklib::Wsgi::Apache[heat_api_wsgi]/Apache::Vhost[heat_api_wsgi]/Concat[10-heat_api_wsgi.conf]/File[/etc/httpd/conf.d/10-heat_api_wsgi.conf]/ensure: defined content as 
'{md5}0c7058cc00274f8ef854451d14ed4825'", > "Notice: Applied catalog in 2.49 seconds", > " Total: 121", > " Success: 121", > " Changed: 121", > " Out of sync: 121", > " Skipped: 32", > " Total: 335", > " Cron: 0.01", > " File: 0.43", > " Heat config: 1.43", > " Last run: 1529657806", > " Total: 6.13", > " Config: 1529657800", > "Gathering files modified after 2018-06-22 08:56:35.558367047 +0000", > "2018-06-22 08:56:48,474 DEBUG: 9545 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,heat_config,file,concat,file_line ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,heat_config,file,concat,file_line'", > "+ origin_of_time=/var/lib/config-data/heat_api.origin_of_time", > "+ touch /var/lib/config-data/heat_api.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,heat_config,file,concat,file_line /etc/config.pp", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/heat/manifests/db.pp\", 75]:[\"/etc/puppet/modules/heat/manifests/init.pp\", 363]", > "Warning: Scope(Class[Heat::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/heat/manifests/config.pp\", 33]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/heat.pp\", 134]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/heat_api", > "++ stat -c %y /var/lib/config-data/heat_api.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 08:56:35.558367047 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/heat_api", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/heat_api", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/heat_api.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/heat_api --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/heat_api --mtime=1970-01-01", > "2018-06-22 08:56:48,474 INFO: 9545 -- Removing container: docker-puppet-heat_api", > "2018-06-22 08:56:48,517 DEBUG: 9545 -- docker-puppet-heat_api", > "2018-06-22 08:56:48,517 INFO: 9545 -- Finished processing puppet configs for heat_api", > "2018-06-22 08:56:48,518 INFO: 9545 -- Starting configuration of heat using image 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", > "2018-06-22 08:56:48,518 DEBUG: 9545 -- config_volume heat", > "2018-06-22 08:56:48,518 DEBUG: 9545 -- puppet_tags file,file_line,concat,augeas,cron,heat_config,file,concat,file_line", > "2018-06-22 08:56:48,518 DEBUG: 9545 -- manifest include ::tripleo::profile::base::heat::engine", > "2018-06-22 08:56:48,518 DEBUG: 9545 -- config_image 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", > "2018-06-22 08:56:48,518 DEBUG: 9545 -- volumes []", > "2018-06-22 08:56:48,518 INFO: 9545 -- Removing container: docker-puppet-heat", > "2018-06-22 08:56:48,566 INFO: 9545 -- Image already exists: 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", > "2018-06-22 08:56:48,569 DEBUG: 9545 -- NET_HOST enabled", > 
"2018-06-22 08:56:48,570 DEBUG: 9545 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-heat --env PUPPET_TAGS=file,file_line,concat,augeas,cron,heat_config,file,concat,file_line --env NAME=heat --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpPhCZSo:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", > "2018-06-22 08:56:50,913 DEBUG: 9547 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.79 seconds", > "Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/admin_token]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/public_bind_host]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/admin_bind_host]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/public_port]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/admin_port]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[token/driver]/ensure: created", > "Notice: 
/Stage[main]/Keystone/Keystone_config[token/expiration]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[ssl/enable]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[catalog/driver]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[catalog/template_file]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[token/provider]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/notification_format]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[eventlet_server/admin_workers]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[eventlet_server/public_workers]/ensure: created", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/fernet-keys]/ensure: created", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/fernet-keys/0]/ensure: defined content as '{md5}3ddf048c6871705212f4baf1cfefd644'", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/fernet-keys/1]/ensure: defined content as '{md5}647fa860739b2fc2966edcf071d44bce'", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/credential-keys]/ensure: created", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/credential-keys/0]/ensure: defined content as '{md5}a5a47011b0d90d93073fccce60578ec1'", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/credential-keys/1]/ensure: defined content as '{md5}eeabf96eb5042b89a83b6e200a9e1507'", > "Notice: /Stage[main]/Keystone/Keystone_config[fernet_tokens/key_repository]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[token/revoke_by_id]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[fernet_tokens/max_active_keys]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[credential/key_repository]/ensure: created", > "Notice: /Stage[main]/Keystone::Config/Keystone_config[ec2/driver]/ensure: created", > "Notice: /Stage[main]/Keystone::Cron::Token_flush/Cron[keystone-manage token_flush]/ensure: created", > 
"Notice: /Stage[main]/Keystone::Logging/Oslo::Log[keystone_config]/Keystone_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Keystone::Logging/Oslo::Log[keystone_config]/Keystone_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Keystone::Policy/Oslo::Policy[keystone_config]/Keystone_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Keystone::Db/Oslo::Db[keystone_config]/Keystone_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Keystone::Db/Oslo::Db[keystone_config]/Keystone_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Keystone::Db/Oslo::Db[keystone_config]/Keystone_config[database/db_max_retries]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Middleware[keystone_config]/Keystone_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Messaging::Default[keystone_config]/Keystone_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Messaging::Notifications[keystone_config]/Keystone_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Messaging::Notifications[keystone_config]/Keystone_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Messaging::Notifications[keystone_config]/Keystone_config[oslo_messaging_notifications/topics]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Messaging::Rabbit[keystone_config]/Keystone_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Messaging::Rabbit[keystone_config]/Keystone_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}56a438a992451d2e90220df5b6ea89c4'", > "Notice: 
/Stage[main]/Keystone::Wsgi::Apache/Openstacklib::Wsgi::Apache[keystone_wsgi_main]/File[keystone_wsgi_main]/ensure: defined content as '{md5}072422f0d75777ed1783e6910b3ddc58'", > "Notice: /Stage[main]/Keystone::Wsgi::Apache/Openstacklib::Wsgi::Apache[keystone_wsgi_admin]/File[keystone_wsgi_admin]/ensure: defined content as '{md5}d6dda52b0e14d80a652ecf42686d3962'", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/10-auth_gssapi.conf]/ensure: removed", > "Notice: /Stage[main]/Keystone::Wsgi::Apache/Openstacklib::Wsgi::Apache[keystone_wsgi_main]/Apache::Vhost[keystone_wsgi_main]/Concat[10-keystone_wsgi_main.conf]/File[/etc/httpd/conf.d/10-keystone_wsgi_main.conf]/ensure: defined content as '{md5}4a7dc38f7247db4905cc9b0d5acf4ea3'", > "Notice: /Stage[main]/Keystone::Wsgi::Apache/Openstacklib::Wsgi::Apache[keystone_wsgi_admin]/Apache::Vhost[keystone_wsgi_admin]/Concat[10-keystone_wsgi_admin.conf]/File[/etc/httpd/conf.d/10-keystone_wsgi_admin.conf]/ensure: defined content as '{md5}f3648a02806a430f97a24c380c6a9710'", > "Notice: Applied catalog in 2.53 seconds", > " Total: 122", > " Success: 122", > " Changed: 122", > " Out of sync: 122", > " Total: 320", > " Skipped: 34", > " File: 0.39", > " Keystone config: 1.50", > " Last run: 1529657809", > " Config retrieval: 4.46", > " Total: 6.43", > " Config: 1529657802", > "Gathering files modified after 2018-06-22 08:56:38.582331432 +0000", > "2018-06-22 08:56:50,913 DEBUG: 9547 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config'", > "+ origin_of_time=/var/lib/config-data/keystone.origin_of_time", > "+ touch /var/lib/config-data/keystone.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags 
file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config /etc/config.pp", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/keystone/manifests/policy.pp\", 34]:[\"/etc/puppet/modules/keystone/manifests/init.pp\", 757]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/keystone/manifests/init.pp\", 760]:[\"/etc/config.pp\", 3]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/keystone/manifests/init.pp\", 1108]:[\"/etc/config.pp\", 3]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/keystone", > "++ stat -c %y /var/lib/config-data/keystone.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 08:56:38.582331432 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/keystone", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/keystone", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/keystone.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/keystone --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/keystone --mtime=1970-01-01", > "2018-06-22 08:56:50,913 INFO: 9547 -- Removing container: docker-puppet-keystone", > "2018-06-22 08:56:50,959 DEBUG: 9547 -- docker-puppet-keystone", > "2018-06-22 08:56:50,959 INFO: 9547 -- Finished processing puppet configs for keystone", > "2018-06-22 08:56:50,959 INFO: 9547 -- Starting configuration of memcached using image 192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4", > "2018-06-22 08:56:50,959 DEBUG: 9547 -- config_volume memcached", > "2018-06-22 08:56:50,959 DEBUG: 9547 -- puppet_tags 
file,file_line,concat,augeas,cron,file", > "2018-06-22 08:56:50,959 DEBUG: 9547 -- manifest include ::tripleo::profile::base::memcached", > "2018-06-22 08:56:50,959 DEBUG: 9547 -- config_image 192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4", > "2018-06-22 08:56:50,959 DEBUG: 9547 -- volumes []", > "2018-06-22 08:56:50,960 INFO: 9547 -- Removing container: docker-puppet-memcached", > "2018-06-22 08:56:51,023 INFO: 9547 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4", > "2018-06-22 08:56:52,400 DEBUG: 9547 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-memcached ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-memcached", > "ca902f72935a: Pulling fs layer", > "ca902f72935a: Verifying Checksum", > "ca902f72935a: Download complete", > "ca902f72935a: Pull complete", > "Digest: sha256:d1285a1e78900b5c0c58e5c03f624e46f6b871ff4ffa9d972ef012568a9f1046", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4", > "2018-06-22 08:56:52,403 DEBUG: 9547 -- NET_HOST enabled", > "2018-06-22 08:56:52,403 DEBUG: 9547 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-memcached --env PUPPET_TAGS=file,file_line,concat,augeas,cron,file --env NAME=memcached --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpZEAxlq:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro 
--volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4", > "2018-06-22 08:56:55,447 DEBUG: 9546 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.75 seconds", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}ae09a6f2644e4d27b63596b04680a034'", > "Notice: /Stage[main]/Nova::Wsgi::Apache_api/Openstacklib::Wsgi::Apache[nova_api_wsgi]/File[/var/www/cgi-bin/nova]/ensure: created", > "Notice: /Stage[main]/Nova::Wsgi::Apache_api/Openstacklib::Wsgi::Apache[nova_api_wsgi]/File[nova_api_wsgi]/ensure: defined content as '{md5}8bcfb466d72544dd31a4f339243ed669'", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/instance_name_template]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[wsgi/api_paste_config]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/enabled_apis]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_compute_listen]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/metadata_listen]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/metadata_listen_port]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_compute_listen_port]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_volume_listen]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_compute_workers]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/metadata_workers]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[api/use_forwarded_for]/ensure: created", > 
"Notice: /Stage[main]/Nova::Api/Nova_config[api/fping_path]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[vendordata_dynamic_auth/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[vendordata_dynamic_auth/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[neutron/service_metadata_proxy]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[neutron/metadata_proxy_shared_secret]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/allow_resize_to_same_host]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/dhcp_domain]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/firewall_driver]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_is_fatal]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_timeout]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/default_floating_pool]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/url]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/timeout]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/region_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/username]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/password]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_url]/ensure: created", > "Notice: 
/Stage[main]/Nova::Network::Neutron/Nova_config[neutron/ovs_bridge]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/extension_sync_interval]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_type]/ensure: created", > "Notice: /Stage[main]/Nova::Conductor/Nova_config[conductor/workers]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler/Nova_config[scheduler/driver]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler/Nova_config[scheduler/discover_hosts_in_cells_interval]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[scheduler/max_attempts]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[filter_scheduler/host_subset_size]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[filter_scheduler/max_io_ops_per_host]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[filter_scheduler/max_instances_per_host]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[filter_scheduler/weight_classes]/ensure: created", > "Notice: /Stage[main]/Nova::Vncproxy/Nova_config[vnc/novncproxy_host]/ensure: created", > "Notice: /Stage[main]/Nova::Vncproxy/Nova_config[vnc/novncproxy_port]/ensure: created", > "Notice: /Stage[main]/Nova::Vncproxy/Nova_config[vnc/auth_schemes]/ensure: created", > "Notice: /Stage[main]/Nova::Policy/Oslo::Policy[nova_config]/Nova_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Oslo::Middleware[nova_config]/Nova_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Nova::Cron::Archive_deleted_rows/Cron[nova-manage db archive_deleted_rows]/ensure: created", > "Notice: /Stage[main]/Nova::Cron::Purge_shadow_tables/Cron[nova-manage db purge]/ensure: created", > "Notice: 
/Stage[main]/Nova::Wsgi::Apache_api/Openstacklib::Wsgi::Apache[nova_api_wsgi]/Apache::Vhost[nova_api_wsgi]/Concat[10-nova_api_wsgi.conf]/File[/etc/httpd/conf.d/10-nova_api_wsgi.conf]/ensure: defined content as '{md5}d16b38fdc96aeaf6e816f7d50511c071'", > "Notice: Applied catalog in 10.13 seconds", > " Total: 180", > " Success: 180", > " Changed: 180", > " Out of sync: 180", > " Total: 501", > " Skipped: 75", > " Cron: 0.03", > " Package: 0.09", > " Total: 14.72", > " Last run: 1529657813", > " Config retrieval: 5.43", > " Nova config: 8.80", > " Config: 1529657797", > "Gathering files modified after 2018-06-22 08:56:32.861399165 +0000", > "2018-06-22 08:56:55,447 DEBUG: 9546 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config'", > "+ origin_of_time=/var/lib/config-data/nova.origin_of_time", > "+ touch /var/lib/config-data/nova.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config /etc/config.pp", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ipv6 instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 105]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/api.pp\", 92]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/nova/manifests/init.pp\", 533]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/api.pp\", 92]", > "Warning: Unknown variable: '::nova::api::default_floating_pool'. at /etc/puppet/modules/nova/manifests/network/neutron.pp:112:38", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Array instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/nova/manifests/scheduler/filter.pp\", 150]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/scheduler.pp\", 32]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/nova", > "++ stat -c %y /var/lib/config-data/nova.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 08:56:32.861399165 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/nova", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/nova", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/nova.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/nova --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/nova --mtime=1970-01-01", > "2018-06-22 08:56:55,447 INFO: 9546 -- Removing container: docker-puppet-nova", > "2018-06-22 08:56:55,504 DEBUG: 9546 -- docker-puppet-nova", > "2018-06-22 08:56:55,505 INFO: 9546 -- Finished processing puppet configs for nova", > "2018-06-22 08:56:55,505 INFO: 9546 -- Starting configuration of iscsid using image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", > "2018-06-22 08:56:55,505 DEBUG: 9546 -- config_volume iscsid", > "2018-06-22 08:56:55,505 DEBUG: 9546 -- puppet_tags file,file_line,concat,augeas,cron,iscsid_config", > "2018-06-22 08:56:55,505 DEBUG: 9546 -- manifest include ::tripleo::profile::base::iscsid", > 
"2018-06-22 08:56:55,506 DEBUG: 9546 -- config_image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", > "2018-06-22 08:56:55,506 DEBUG: 9546 -- volumes [u'/etc/iscsi:/etc/iscsi']", > "2018-06-22 08:56:55,506 INFO: 9546 -- Removing container: docker-puppet-iscsid", > "2018-06-22 08:56:55,572 INFO: 9546 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", > "2018-06-22 08:56:56,187 DEBUG: 9546 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-iscsid ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-iscsid", > "ab4eae34093d: Pulling fs layer", > "ab4eae34093d: Verifying Checksum", > "ab4eae34093d: Download complete", > "ab4eae34093d: Pull complete", > "Digest: sha256:a46aa93fee87b0f173118da5c2a18dc271772adb839a481ec07f2a53534ac53c", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", > "2018-06-22 08:56:56,190 DEBUG: 9546 -- NET_HOST enabled", > "2018-06-22 08:56:56,190 DEBUG: 9546 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-iscsid --env PUPPET_TAGS=file,file_line,concat,augeas,cron,iscsid_config --env NAME=iscsid --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpg4x59Y:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --volume /etc/iscsi:/etc/iscsi 
--entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", > "2018-06-22 08:56:58,330 DEBUG: 9547 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 0.61 seconds", > "Notice: /Stage[main]/Memcached/File[/etc/sysconfig/memcached]/content: content changed '{md5}a50ed62e82d31fb4cb2de2226650c545' to '{md5}1cd26a5d46546254d21bd1aeb94b0203'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Memcached/Systemd::Dropin_file[memcached.conf]/File[/etc/systemd/system/memcached.service.d]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Memcached/Systemd::Dropin_file[memcached.conf]/File[/etc/systemd/system/memcached.service.d/memcached.conf]/ensure: defined content as '{md5}c44e90292b030f86c3b82096b68fe9cc'", > "Notice: Applied catalog in 0.07 seconds", > " Total: 3", > " Success: 3", > " Skipped: 10", > " Config retrieval: 0.72", > " Total: 0.74", > " Last run: 1529657817", > " Config: 1529657816", > "Gathering files modified after 2018-06-22 08:56:52.596171756 +0000", > "2018-06-22 08:56:58,330 DEBUG: 9547 -- + mkdir -p /etc/puppet", > "+ origin_of_time=/var/lib/config-data/memcached.origin_of_time", > "+ touch /var/lib/config-data/memcached.origin_of_time", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/memcached", > "++ stat -c %y /var/lib/config-data/memcached.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 08:56:52.596171756 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/memcached", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/memcached", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/memcached.origin_of_time -not -path 
'/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/memcached --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/memcached --mtime=1970-01-01", > "2018-06-22 08:56:58,330 INFO: 9547 -- Removing container: docker-puppet-memcached", > "2018-06-22 08:56:58,365 DEBUG: 9547 -- docker-puppet-memcached", > "2018-06-22 08:56:58,365 INFO: 9547 -- Finished processing puppet configs for memcached", > "2018-06-22 08:56:58,366 INFO: 9547 -- Starting configuration of panko using image 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", > "2018-06-22 08:56:58,366 DEBUG: 9547 -- config_volume panko", > "2018-06-22 08:56:58,366 DEBUG: 9547 -- puppet_tags file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config", > "2018-06-22 08:56:58,366 DEBUG: 9547 -- manifest include tripleo::profile::base::panko::api", > "2018-06-22 08:56:58,366 DEBUG: 9547 -- config_image 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", > "2018-06-22 08:56:58,366 DEBUG: 9547 -- volumes []", > "2018-06-22 08:56:58,366 INFO: 9547 -- Removing container: docker-puppet-panko", > "2018-06-22 08:56:58,440 INFO: 9547 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", > "2018-06-22 08:56:59,189 DEBUG: 9545 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.22 seconds", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/auth_encryption_key]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/heat_metadata_server_url]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/heat_waitcondition_server_url]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/max_resources_per_stack]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/num_engine_workers]/ensure: created", > "Notice: 
/Stage[main]/Heat::Engine/Heat_config[DEFAULT/convergence_engine]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/reauthentication_auth_method]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/max_nested_stack_depth]/ensure: created", > "Notice: Applied catalog in 1.89 seconds", > " Total: 48", > " Success: 48", > " Skipped: 21", > " Total: 223", > " Out of sync: 48", > " Changed: 48", > " Heat config: 1.63", > " Config retrieval: 2.57", > " Total: 4.27", > " Config: 1529657813", > "Gathering files modified after 2018-06-22 08:56:48.762214576 +0000", > "2018-06-22 08:56:59,189 DEBUG: 9545 -- + mkdir -p /etc/puppet", > "+ origin_of_time=/var/lib/config-data/heat.origin_of_time", > "+ touch /var/lib/config-data/heat.origin_of_time", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/heat", > "++ stat -c %y /var/lib/config-data/heat.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 08:56:48.762214576 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/heat", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/heat", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/heat.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/heat --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/heat --mtime=1970-01-01", > "2018-06-22 08:56:59,189 INFO: 9545 -- Removing container: docker-puppet-heat", > "2018-06-22 08:56:59,231 DEBUG: 9545 -- docker-puppet-heat", > "2018-06-22 08:56:59,231 INFO: 9545 -- Finished processing puppet configs for heat", > "2018-06-22 08:56:59,231 INFO: 9545 -- Starting configuration of cinder using image 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", > "2018-06-22 08:56:59,231 DEBUG: 9545 -- config_volume cinder", > "2018-06-22 08:56:59,231 DEBUG: 9545 
-- puppet_tags file,file_line,concat,augeas,cron,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line", > "2018-06-22 08:56:59,231 DEBUG: 9545 -- manifest include ::tripleo::profile::base::cinder::api", > "include ::tripleo::profile::base::cinder::backup::ceph", > "include ::tripleo::profile::base::cinder::scheduler", > "include ::tripleo::profile::base::lvm", > "2018-06-22 08:56:59,231 DEBUG: 9545 -- config_image 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", > "2018-06-22 08:56:59,232 DEBUG: 9545 -- volumes []", > "2018-06-22 08:56:59,232 INFO: 9545 -- Removing container: docker-puppet-cinder", > "2018-06-22 08:56:59,299 INFO: 9545 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", > "2018-06-22 08:57:00,974 DEBUG: 9547 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-panko-api ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-panko-api", > "e67be68e6dd6: Pulling fs layer", > "37e4d86c7a37: Pulling fs layer", > "37e4d86c7a37: Verifying Checksum", > "37e4d86c7a37: Download complete", > "e67be68e6dd6: Verifying Checksum", > "e67be68e6dd6: Download complete", > "e67be68e6dd6: Pull complete", > "37e4d86c7a37: Pull complete", > "Digest: sha256:af7f2810620f1617a589387bcde33173bbf96ee4d0ea85e34d70bdfd83328d21", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", > "2018-06-22 08:57:00,977 DEBUG: 9547 -- NET_HOST enabled", > "2018-06-22 08:57:00,977 DEBUG: 9547 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-panko --env PUPPET_TAGS=file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config --env NAME=panko --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpgCY3lS:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume 
/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", > "2018-06-22 08:57:02,328 DEBUG: 9546 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 0.56 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[reset-iscsi-initiator-name]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/File[/etc/iscsi/.initiator_reset]/ensure: created", > " Total: 2", > " Success: 2", > " Total: 10", > " Out of sync: 2", > " Changed: 2", > " Skipped: 8", > " Exec: 0.02", > " Config retrieval: 0.73", > " Total: 0.76", > " Last run: 1529657821", > " Config: 1529657820", > "Gathering files modified after 2018-06-22 08:56:56.387130040 +0000", > "2018-06-22 08:57:02,328 DEBUG: 9546 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,iscsid_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,iscsid_config'", > "+ origin_of_time=/var/lib/config-data/iscsid.origin_of_time", > "+ touch /var/lib/config-data/iscsid.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console 
--modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,iscsid_config /etc/config.pp", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/iscsid", > "++ stat -c %y /var/lib/config-data/iscsid.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 08:56:56.387130040 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/iscsid", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/iscsid", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/iscsid.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/iscsid --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/iscsid --mtime=1970-01-01", > "2018-06-22 08:57:02,328 INFO: 9546 -- Removing container: docker-puppet-iscsid", > "2018-06-22 08:57:02,365 DEBUG: 9546 -- docker-puppet-iscsid", > "2018-06-22 08:57:02,365 INFO: 9546 -- Finished processing puppet configs for iscsid", > "2018-06-22 08:57:02,365 INFO: 9546 -- Starting configuration of glance_api using image 192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", > "2018-06-22 08:57:02,366 DEBUG: 9546 -- config_volume glance_api", > "2018-06-22 08:57:02,366 DEBUG: 9546 -- puppet_tags file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config", > "2018-06-22 08:57:02,366 DEBUG: 9546 -- manifest include ::tripleo::profile::base::glance::api", > "2018-06-22 08:57:02,366 DEBUG: 9546 -- config_image 192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", > "2018-06-22 08:57:02,366 DEBUG: 9546 -- volumes []", > "2018-06-22 08:57:02,366 INFO: 9546 -- Removing container: docker-puppet-glance_api", > "2018-06-22 08:57:02,432 INFO: 9546 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", > "2018-06-22 08:57:07,201 DEBUG: 
9545 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cinder-api ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-cinder-api", > "5e7b63a88a76: Pulling fs layer", > "56e05018c234: Pulling fs layer", > "56e05018c234: Verifying Checksum", > "56e05018c234: Download complete", > "5e7b63a88a76: Verifying Checksum", > "5e7b63a88a76: Download complete", > "5e7b63a88a76: Pull complete", > "56e05018c234: Pull complete", > "Digest: sha256:183deb2657acebac30853e0973dad9bbf1f1f1288cff99eeb24fb4ae2fc7b1d3", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", > "2018-06-22 08:57:07,204 DEBUG: 9545 -- NET_HOST enabled", > "2018-06-22 08:57:07,204 DEBUG: 9545 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-cinder --env PUPPET_TAGS=file,file_line,concat,augeas,cron,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line --env NAME=cinder --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpEqkNwy:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", > "2018-06-22 
08:57:08,049 DEBUG: 9546 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-glance-api ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-glance-api", > "a5deab52212a: Pulling fs layer", > "8b31454e1757: Pulling fs layer", > "8b31454e1757: Verifying Checksum", > "8b31454e1757: Download complete", > "a5deab52212a: Verifying Checksum", > "a5deab52212a: Download complete", > "a5deab52212a: Pull complete", > "8b31454e1757: Pull complete", > "Digest: sha256:266d9d00d90cc84effdabd7cad9bea244a8fb918a029a3d2bafa4e2af9a72e77", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", > "2018-06-22 08:57:08,053 DEBUG: 9546 -- NET_HOST enabled", > "2018-06-22 08:57:08,053 DEBUG: 9546 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-glance_api --env PUPPET_TAGS=file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config --env NAME=glance_api --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpGv4gDR:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", > "2018-06-22 08:57:12,456 DEBUG: 9547 -- Notice: 
hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.36 seconds", > "Notice: /Stage[main]/Panko::Api/Panko_config[api/host]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Panko_config[api/port]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Panko_config[api/workers]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Panko_config[api/max_limit]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Panko_config[database/event_time_to_live]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Panko_api_paste_ini[pipeline:main/pipeline]/ensure: created", > "Notice: /Stage[main]/Panko::Expirer/Cron[panko-expirer]/ensure: created", > "Notice: /Stage[main]/Panko::Logging/Oslo::Log[panko_config]/Panko_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Panko::Db/Oslo::Db[panko_config]/Panko_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Panko::Policy/Oslo::Policy[panko_config]/Panko_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/username]/ensure: created", > "Notice: 
/Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Oslo::Middleware[panko_config]/Panko_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}2775a863b799ced75e616d14b831dc51'", > "Notice: /Stage[main]/Panko::Wsgi::Apache/Openstacklib::Wsgi::Apache[panko_wsgi]/File[/var/www/cgi-bin/panko]/ensure: created", > "Notice: /Stage[main]/Panko::Wsgi::Apache/Openstacklib::Wsgi::Apache[panko_wsgi]/File[panko_wsgi]/ensure: defined content as '{md5}e6f446b6267321fd2251a3e83021181a'", > "Notice: /Stage[main]/Panko::Wsgi::Apache/Openstacklib::Wsgi::Apache[panko_wsgi]/Apache::Vhost[panko_wsgi]/Concat[10-panko_wsgi.conf]/File[/etc/httpd/conf.d/10-panko_wsgi.conf]/ensure: defined content as '{md5}4142b03bcf14936fe5263987d2d4bc3c'", > "Notice: Applied catalog in 1.15 seconds", > " Total: 101", > " Success: 101", > " Changed: 101", > " Out of sync: 101", > " Total: 255", > " Panko api paste ini: 0.00", > " Panko config: 0.19", > " File: 0.38", > " Last run: 1529657831", > " Config retrieval: 3.82", > " Total: 4.47", > " Config: 1529657826", > "Gathering files modified after 2018-06-22 08:57:01.190078065 +0000", > "2018-06-22 08:57:12,456 DEBUG: 9547 -- + mkdir -p /etc/puppet", > "+ '[' -n 
file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config'", > "+ origin_of_time=/var/lib/config-data/panko.origin_of_time", > "+ touch /var/lib/config-data/panko.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config /etc/config.pp", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/panko/manifests/config.pp\", 33]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/panko.pp\", 32]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/panko/manifests/db.pp\", 59]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/panko/api.pp\", 83]", > "Warning: Scope(Class[Panko::Api]): This Class is deprecated and will be removed in future releases.", > "Warning: Scope(Class[Panko::Keystone::Authtoken]): The auth_uri parameter is deprecated. 
Please use www_authenticate_uri instead.", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/panko", > "++ stat -c %y /var/lib/config-data/panko.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 08:57:01.190078065 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/panko", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/panko", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/panko.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/panko --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/panko --mtime=1970-01-01", > "2018-06-22 08:57:12,456 INFO: 9547 -- Removing container: docker-puppet-panko", > "2018-06-22 08:57:12,509 DEBUG: 9547 -- docker-puppet-panko", > "2018-06-22 08:57:12,509 INFO: 9547 -- Finished processing puppet configs for panko", > "2018-06-22 08:57:12,510 INFO: 9547 -- Starting configuration of crond using image 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-22 08:57:12,510 DEBUG: 9547 -- config_volume crond", > "2018-06-22 08:57:12,510 DEBUG: 9547 -- puppet_tags file,file_line,concat,augeas,cron", > "2018-06-22 08:57:12,510 DEBUG: 9547 -- manifest include ::tripleo::profile::base::logging::logrotate", > "2018-06-22 08:57:12,510 DEBUG: 9547 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-22 08:57:12,510 DEBUG: 9547 -- volumes []", > "2018-06-22 08:57:12,511 INFO: 9547 -- Removing container: docker-puppet-crond", > "2018-06-22 08:57:12,577 INFO: 9547 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-22 08:57:13,056 DEBUG: 9547 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cron ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-cron", > "a94d9ea04263: Pulling fs layer", > "a94d9ea04263: Verifying Checksum", > "a94d9ea04263: Download complete", > "a94d9ea04263: Pull complete", > "Digest: sha256:cbc58f1f133447db6c3e634ca05251825f6a2ede8528959b5cd6e0cb1c3de3ba", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-22 08:57:13,059 DEBUG: 9547 -- NET_HOST enabled", > "2018-06-22 08:57:13,059 DEBUG: 9547 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-crond --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=crond --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpXyjQh3:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-22 08:57:18,726 DEBUG: 9547 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 0.42 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/File[/etc/logrotate-crond.conf]/ensure: defined content as 
'{md5}13ae5d5b43716a32da6855edd3f15758'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/Cron[logrotate-crond]/ensure: created", > "Notice: Applied catalog in 0.03 seconds", > " Skipped: 7", > " Total: 9", > " Config retrieval: 0.52", > " Total: 0.52", > " Last run: 1529657838", > " Config: 1529657837", > "Gathering files modified after 2018-06-22 08:57:13.237951895 +0000", > "2018-06-22 08:57:18,726 DEBUG: 9547 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron'", > "+ origin_of_time=/var/lib/config-data/crond.origin_of_time", > "+ touch /var/lib/config-data/crond.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron /etc/config.pp", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/crond", > "++ stat -c %y /var/lib/config-data/crond.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 08:57:13.237951895 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/crond", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/crond", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/crond.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/crond --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/crond --mtime=1970-01-01", > "2018-06-22 08:57:18,726 INFO: 9547 -- Removing container: docker-puppet-crond", > "2018-06-22 08:57:18,757 DEBUG: 9547 -- docker-puppet-crond", > "2018-06-22 08:57:18,757 INFO: 9547 -- Finished processing puppet configs for crond", > "2018-06-22 08:57:18,757 INFO: 9547 -- Starting configuration of haproxy using image 
192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", > "2018-06-22 08:57:18,758 DEBUG: 9547 -- config_volume haproxy", > "2018-06-22 08:57:18,758 DEBUG: 9547 -- puppet_tags file,file_line,concat,augeas,cron,haproxy_config", > "2018-06-22 08:57:18,758 DEBUG: 9547 -- manifest exec {'wait-for-settle': command => '/bin/true' }", > "2018-06-22 08:57:18,758 DEBUG: 9547 -- config_image 192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", > "2018-06-22 08:57:18,758 DEBUG: 9547 -- volumes [u'/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro', u'/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro', u'/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro', u'/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro']", > "2018-06-22 08:57:18,758 INFO: 9547 -- Removing container: docker-puppet-haproxy", > "2018-06-22 08:57:18,819 INFO: 9547 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", > "2018-06-22 08:57:19,580 DEBUG: 9546 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.32 seconds", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/bind_host]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/bind_port]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/workers]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/show_image_direct_url]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/show_multiple_locations]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/image_cache_dir]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/enabled_import_methods]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/node_staging_uri]/ensure: created", > 
"Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/image_member_quota]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/enable_v1_api]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/enable_v2_api]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[glance_store/os_region_name]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[glance_store/stores]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_cache_config[glance_store/os_region_name]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/registry_host]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_cache_config[DEFAULT/registry_host]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[paste_deploy/flavor]/ensure: created", > "Notice: /Stage[main]/Glance::Backend::Rbd/Glance_api_config[glance_store/rbd_store_ceph_conf]/ensure: created", > "Notice: /Stage[main]/Glance::Backend::Rbd/Glance_api_config[glance_store/rbd_store_user]/ensure: created", > "Notice: /Stage[main]/Glance::Backend::Rbd/Glance_api_config[glance_store/rbd_store_pool]/ensure: created", > "Notice: /Stage[main]/Glance::Backend::Rbd/Glance_api_config[glance_store/default_store]/ensure: created", > "Notice: /Stage[main]/Glance::Policy/Oslo::Policy[glance_api_config]/Glance_api_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Db/Oslo::Db[glance_api_config]/Glance_api_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Logging/Oslo::Log[glance_api_config]/Glance_api_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Logging/Oslo::Log[glance_api_config]/Glance_api_config[DEFAULT/log_file]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Logging/Oslo::Log[glance_api_config]/Glance_api_config[DEFAULT/log_dir]/ensure: created", > "Notice: 
/Stage[main]/Glance::Cache::Logging/Oslo::Log[glance_cache_config]/Glance_cache_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Glance::Cache::Logging/Oslo::Log[glance_cache_config]/Glance_cache_config[DEFAULT/log_file]/ensure: created", > "Notice: /Stage[main]/Glance::Cache::Logging/Oslo::Log[glance_cache_config]/Glance_cache_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: 
/Stage[main]/Glance::Api/Oslo::Middleware[glance_api_config]/Glance_api_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Glance::Notify::Rabbitmq/Oslo::Messaging::Rabbit[glance_api_config]/Glance_api_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Glance::Notify::Rabbitmq/Oslo::Messaging::Default[glance_api_config]/Glance_api_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Glance::Notify::Rabbitmq/Oslo::Messaging::Notifications[glance_api_config]/Glance_api_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Glance::Notify::Rabbitmq/Oslo::Messaging::Notifications[glance_api_config]/Glance_api_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: Applied catalog in 2.62 seconds", > " Total: 44", > " Success: 44", > " Out of sync: 44", > " Changed: 44", > " Skipped: 59", > " Glance cache config: 0.26", > " Glance api config: 2.02", > " Config retrieval: 2.71", > " Total: 5.05", > " Config: 1529657833", > "Gathering files modified after 2018-06-22 08:57:08.237003549 +0000", > "2018-06-22 08:57:19,581 DEBUG: 9546 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config'", > "+ origin_of_time=/var/lib/config-data/glance_api.origin_of_time", > "+ touch /var/lib/config-data/glance_api.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config /etc/config.pp", > " with Stdlib::Compat::Hash. 
There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/glance/manifests/config.pp\", 48]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/glance/api.pp\", 202]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/glance/manifests/api/db.pp\", 69]:[\"/etc/puppet/modules/glance/manifests/api.pp\", 371]", > "Warning: Unknown variable: 'default_store_real'. at /etc/puppet/modules/glance/manifests/api.pp:438:9", > "Warning: Scope(Class[Glance::Api]): default_store not provided, it will be automatically set to http", > "Warning: Scope(Class[Glance::Api::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/glance_api", > "++ stat -c %y /var/lib/config-data/glance_api.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 08:57:08.237003549 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/glance_api", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/glance_api", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/glance_api.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/glance_api --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/glance_api --mtime=1970-01-01", > "2018-06-22 08:57:19,581 INFO: 9546 -- Removing container: docker-puppet-glance_api", > "2018-06-22 08:57:19,626 DEBUG: 9546 -- docker-puppet-glance_api", > "2018-06-22 08:57:19,627 INFO: 9546 -- Finished processing puppet configs for glance_api", > "2018-06-22 08:57:19,627 INFO: 9546 -- Starting configuration of rabbitmq using image 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", > "2018-06-22 08:57:19,627 DEBUG: 9546 -- config_volume rabbitmq", > 
"2018-06-22 08:57:19,627 DEBUG: 9546 -- puppet_tags file,file_line,concat,augeas,cron,file", > "2018-06-22 08:57:19,627 DEBUG: 9546 -- manifest ['Rabbitmq_policy', 'Rabbitmq_user'].each |String $val| { noop_resource($val) }", > "2018-06-22 08:57:19,627 DEBUG: 9546 -- config_image 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", > "2018-06-22 08:57:19,627 DEBUG: 9546 -- volumes []", > "2018-06-22 08:57:19,628 INFO: 9546 -- Removing container: docker-puppet-rabbitmq", > "2018-06-22 08:57:19,705 INFO: 9546 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", > "2018-06-22 08:57:22,800 DEBUG: 9547 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-haproxy ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-haproxy", > "a82042577283: Pulling fs layer", > "a82042577283: Download complete", > "a82042577283: Pull complete", > "Digest: sha256:79a7901cc6403d11b4e7f6978d7e99a1879972ccb61f430f5660695c8683d7a0", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", > "2018-06-22 08:57:22,803 DEBUG: 9547 -- NET_HOST enabled", > "2018-06-22 08:57:22,803 DEBUG: 9547 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-haproxy --env PUPPET_TAGS=file,file_line,concat,augeas,cron,haproxy_config --env NAME=haproxy --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpSQN6Uc:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume 
/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --volume /etc/ipa/ca.crt:/etc/ipa/ca.crt:ro --volume /etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro --volume /etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro --volume /etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", > "2018-06-22 08:57:24,444 DEBUG: 9546 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-rabbitmq ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-rabbitmq", > "e603d701fd04: Pulling fs layer", > "e603d701fd04: Verifying Checksum", > "e603d701fd04: Download complete", > "e603d701fd04: Pull complete", > "Digest: sha256:4e07b8b4fd82b69e2a7ba105447776e730b0dd8fffa70a2f13c5c0e612b1ccdc", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", > "2018-06-22 08:57:24,447 DEBUG: 9546 -- NET_HOST enabled", > "2018-06-22 08:57:24,447 DEBUG: 9546 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-rabbitmq --env PUPPET_TAGS=file,file_line,concat,augeas,cron,file --env NAME=rabbitmq --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpXcofyC:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro 
--volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", > "2018-06-22 08:57:25,413 DEBUG: 9545 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.63 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Lvm/Augeas[udev options in lvm.conf]/returns: executed successfully", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}2a666816bb5475b8f829dfc02247c738'", > "Notice: /Stage[main]/Cinder/Cinder_config[DEFAULT/api_paste_config]/ensure: created", > "Notice: /Stage[main]/Cinder/Cinder_config[DEFAULT/storage_availability_zone]/ensure: created", > "Notice: /Stage[main]/Cinder/Cinder_config[DEFAULT/default_availability_zone]/ensure: created", > "Notice: /Stage[main]/Cinder/Cinder_config[DEFAULT/enable_v3_api]/ensure: created", > "Notice: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_api_servers]/ensure: created", > "Notice: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_api_version]/ensure: created", > "Notice: /Stage[main]/Cinder::Cron::Db_purge/Cron[cinder-manage db purge]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Cinder_config[DEFAULT/osapi_volume_listen]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Cinder_config[DEFAULT/osapi_volume_workers]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Cinder_config[DEFAULT/auth_strategy]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Cinder_config[DEFAULT/nova_catalog_info]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Cinder_config[key_manager/backend]/ensure: created", > "Notice: 
/Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_driver]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_conf]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_user]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_chunk_size]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_pool]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_stripe_unit]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_stripe_count]/ensure: created", > "Notice: /Stage[main]/Cinder::Scheduler/Cinder_config[DEFAULT/scheduler_driver]/ensure: created", > "Notice: /Stage[main]/Cinder::Backends/Cinder_config[DEFAULT/enabled_backends]/ensure: created", > "Notice: /Stage[main]/Cinder::Backends/Cinder_config[tripleo_ceph/backend_host]/ensure: created", > "Notice: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/db_max_retries]/ensure: created", > "Notice: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Cinder/Oslo::Messaging::Default[cinder_config]/Cinder_config[DEFAULT/transport_url]/ensure: 
created", > "Notice: /Stage[main]/Cinder/Oslo::Messaging::Default[cinder_config]/Cinder_config[DEFAULT/control_exchange]/ensure: created", > "Notice: /Stage[main]/Cinder/Oslo::Concurrency[cinder_config]/Cinder_config[oslo_concurrency/lock_path]/ensure: created", > "Notice: /Stage[main]/Cinder::Ceilometer/Oslo::Messaging::Notifications[cinder_config]/Cinder_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Cinder::Ceilometer/Oslo::Messaging::Notifications[cinder_config]/Cinder_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Cinder::Policy/Oslo::Policy[cinder_config]/Cinder_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Oslo::Middleware[cinder_config]/Cinder_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: 
/Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Cinder::Wsgi::Apache/Openstacklib::Wsgi::Apache[cinder_wsgi]/File[cinder_wsgi]/ensure: defined content as '{md5}870efbe437d63cd260287cd36472d7b1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/volume_backend_name]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/volume_driver]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_ceph_conf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_user]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_pool]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_secret_uuid]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/File[/etc/sysconfig/openstack-cinder-volume]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/File_line[set initscript env tripleo_ceph]/ensure: created", > "Notice: /Stage[main]/Cinder::Wsgi::Apache/Openstacklib::Wsgi::Apache[cinder_wsgi]/Apache::Vhost[cinder_wsgi]/Concat[10-cinder_wsgi.conf]/File[/etc/httpd/conf.d/10-cinder_wsgi.conf]/ensure: defined 
content as '{md5}e0634aeddf30a7e69fbe6edbf61f2135'", > "Notice: Applied catalog in 5.39 seconds", > " Total: 134", > " Success: 134", > " Changed: 134", > " Out of sync: 134", > " Skipped: 36", > " Total: 374", > " File line: 0.00", > " File: 0.35", > " Augeas: 0.69", > " Total: 10.01", > " Last run: 1529657843", > " Cinder config: 3.63", > " Config retrieval: 5.26", > "Gathering files modified after 2018-06-22 08:57:07.393012367 +0000", > "2018-06-22 08:57:25,414 DEBUG: 9545 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line'", > "+ origin_of_time=/var/lib/config-data/cinder.origin_of_time", > "+ touch /var/lib/config-data/cinder.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line /etc/config.pp", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/cinder/manifests/db.pp\", 69]:[\"/etc/puppet/modules/cinder/manifests/init.pp\", 320]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/cinder/manifests/config.pp\", 38]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/cinder.pp\", 127]", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/cinder/manifests/api.pp\", 203]:[\"/etc/config.pp\", 2]", > "Warning: Scope(Class[Cinder::Api]): The nova_catalog_admin_info parameter has been deprecated and will be removed in the future release.", > "Warning: Scope(Class[Cinder::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > "Warning: Unknown variable: 'ensure'. at /etc/puppet/modules/cinder/manifests/backup.pp:83:18", > "Warning: Unknown variable: 'ensure'. at /etc/puppet/modules/cinder/manifests/volume.pp:64:18", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/cinder", > "++ stat -c %y /var/lib/config-data/cinder.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 08:57:07.393012367 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/cinder", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/cinder", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/cinder.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/cinder --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/cinder --mtime=1970-01-01", > "2018-06-22 08:57:25,414 INFO: 9545 -- Removing container: docker-puppet-cinder", > "2018-06-22 08:57:25,468 DEBUG: 9545 -- docker-puppet-cinder", > "2018-06-22 08:57:25,468 INFO: 9545 -- Finished processing puppet configs for cinder", > "2018-06-22 08:57:25,469 INFO: 9545 -- Starting configuration of swift using image 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", > "2018-06-22 08:57:25,469 DEBUG: 9545 -- config_volume swift", > "2018-06-22 08:57:25,469 DEBUG: 9545 -- puppet_tags 
file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server", > "2018-06-22 08:57:25,469 DEBUG: 9545 -- manifest include ::tripleo::profile::base::swift::proxy", > "include ::tripleo::profile::base::swift::storage", > "2018-06-22 08:57:25,469 DEBUG: 9545 -- config_image 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", > "2018-06-22 08:57:25,469 DEBUG: 9545 -- volumes []", > "2018-06-22 08:57:25,469 INFO: 9545 -- Removing container: docker-puppet-swift", > "2018-06-22 08:57:25,519 INFO: 9545 -- Image already exists: 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", > "2018-06-22 08:57:25,522 DEBUG: 9545 -- NET_HOST enabled", > "2018-06-22 08:57:25,522 DEBUG: 9545 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-swift --env PUPPET_TAGS=file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server --env NAME=swift --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpQhQZbN:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume 
/var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", > "2018-06-22 08:57:32,353 DEBUG: 9547 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.71 seconds", > "Notice: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]/content: content changed '{md5}1f337186b0e1ba5ee82760cb437fb810' to '{md5}9b8bfa47b45cd74d35cc02c53d1002ab'", > "Notice: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]/mode: mode changed '0644' to '0640'", > "Notice: Applied catalog in 0.36 seconds", > " Changed: 1", > " Out of sync: 1", > " Total: 76", > " File: 0.08", > " Last run: 1529657851", > " Config retrieval: 2.96", > " Total: 3.05", > " Config: 1529657848", > "Gathering files modified after 2018-06-22 08:57:23.007853841 +0000", > "2018-06-22 08:57:32,353 DEBUG: 9547 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,haproxy_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,haproxy_config'", > "+ origin_of_time=/var/lib/config-data/haproxy.origin_of_time", > "+ touch /var/lib/config-data/haproxy.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,haproxy_config /etc/config.pp", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ipv6 instead. 
They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/pacemaker/haproxy_with_vip.pp\", 65]:", > "Warning: Scope(Haproxy::Config[haproxy]): haproxy: The $merge_options parameter will default to true in the next major release. Please review the documentation regarding the implications.", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/haproxy", > "++ stat -c %y /var/lib/config-data/haproxy.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 08:57:23.007853841 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/haproxy", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/haproxy", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/haproxy.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/haproxy --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/haproxy --mtime=1970-01-01", > "2018-06-22 08:57:32,353 INFO: 9547 -- Removing container: docker-puppet-haproxy", > "2018-06-22 08:57:32,390 DEBUG: 9547 -- docker-puppet-haproxy", > "2018-06-22 08:57:32,390 INFO: 9547 -- Finished processing puppet configs for haproxy", > "2018-06-22 08:57:32,391 INFO: 9547 -- Starting configuration of ceilometer using image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", > "2018-06-22 08:57:32,391 DEBUG: 9547 -- config_volume ceilometer", > "2018-06-22 08:57:32,391 DEBUG: 9547 -- puppet_tags file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config", > "2018-06-22 08:57:32,391 DEBUG: 9547 -- manifest include ::tripleo::profile::base::ceilometer::agent::polling", > "include ::tripleo::profile::base::ceilometer::agent::notification", > "2018-06-22 08:57:32,391 DEBUG: 9547 -- config_image 
192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", > "2018-06-22 08:57:32,391 DEBUG: 9547 -- volumes []", > "2018-06-22 08:57:32,391 INFO: 9547 -- Removing container: docker-puppet-ceilometer", > "2018-06-22 08:57:32,449 INFO: 9547 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", > "2018-06-22 08:57:33,917 DEBUG: 9545 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 1.68 seconds", > "Notice: /Stage[main]/Swift::Keymaster/Swift_keymaster_config[kms_keymaster/api_class]/ensure: created", > "Notice: /Stage[main]/Swift::Keymaster/Swift_keymaster_config[kms_keymaster/username]/ensure: created", > "Notice: /Stage[main]/Swift::Keymaster/Swift_keymaster_config[kms_keymaster/project_name]/ensure: created", > "Notice: /Stage[main]/Swift::Keymaster/Swift_keymaster_config[kms_keymaster/project_domain_id]/ensure: created", > "Notice: /Stage[main]/Swift::Keymaster/Swift_keymaster_config[kms_keymaster/user_domain_id]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[filter:cache/memcache_servers]/value: value changed '127.0.0.1:11211' to '172.17.1.10:11211'", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/auto_create_account_prefix]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/concurrency]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/expiring_objects_account_name]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/interval]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/process]/ensure: created", > "Notice: 
/Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/processes]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/reclaim_age]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/recon_cache_path]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/report_interval]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/log_facility]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/log_level]/ensure: created", > "Notice: /Stage[main]/Rsync::Server/Xinetd::Service[rsync]/File[/rsync]/ensure: defined content as '{md5}d40b899ec9278bccf0a4a3b2f0c99685'", > "Notice: /Stage[main]/Rsync::Server/Concat[/etc/rsyncd.conf]/File[/etc/rsyncd.conf]/content: content changed '{md5}c63fccb45c0dcbbbe17d0f4bdba920ec' to '{md5}252b7bd0c7986c9ceb41a7dccc481918'", > "Notice: /Stage[main]/Swift/Swift_config[swift-hash/swift_hash_path_suffix]/value: value changed '%SWIFT_HASH_PATH_SUFFIX%' to 'OJ2m4Tm9Ho10GUzJVC46bPi1G'", > "Notice: /Stage[main]/Swift/Swift_config[swift-constraints/max_header_size]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/bind_ip]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/workers]/value: value changed '8' to 'auto'", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_name]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_facility]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_level]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_headers]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_address]/ensure: created", > "Notice: 
/Stage[main]/Swift::Proxy/Swift_proxy_config[pipeline:main/pipeline]/value: value changed 'catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk tempurl ratelimit copy container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server' to 'catch_errors healthcheck proxy-logging cache ratelimit bulk tempurl formpost authtoken keystone staticweb copy container_quotas account_quotas slo dlo versioned_writes proxy-logging proxy-server'", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/set log_name]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/set log_facility]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/set log_level]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/set log_address]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/log_handoffs]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/allow_account_management]/value: value changed 'true' to 'True'", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/account_autocreate]/value: value changed 'true' to 'True'", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/node_timeout]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Cache/Swift_proxy_config[filter:cache/memcache_servers]/value: value changed '127.0.0.1:11211' to '172.17.1.10:11211'", > "Notice: /Stage[main]/Swift::Proxy::Keystone/Swift_proxy_config[filter:keystone/operator_roles]/value: value changed 'admin, SwiftOperator' to 'admin, swiftoperator, ResellerAdmin'", > "Notice: /Stage[main]/Swift::Proxy::Keystone/Swift_proxy_config[filter:keystone/reseller_prefix]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/File[/var/cache/swift]/mode: mode changed '0755' to '0700'", > "Notice: 
/Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/log_name]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/signing_dir]/value: value changed '/tmp/keystone-signing-swift' to '/var/cache/swift'", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/auth_plugin]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/project_domain_id]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/user_domain_id]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/username]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/password]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/delay_auth_decision]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/cache]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/include_service_catalog]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Staticweb/Swift_proxy_config[filter:staticweb/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Staticweb/Swift_proxy_config[filter:staticweb/url_base]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/clock_accuracy]/ensure: created", > "Notice: 
/Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/max_sleep_time_seconds]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/log_sleep_time_seconds]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/rate_buffer_seconds]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/account_ratelimit]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Formpost/Swift_proxy_config[filter:formpost/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Bulk/Swift_proxy_config[filter:bulk/max_containers_per_extraction]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Bulk/Swift_proxy_config[filter:bulk/max_failed_extractions]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Bulk/Swift_proxy_config[filter:bulk/max_deletes_per_request]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Bulk/Swift_proxy_config[filter:bulk/yield_frequency]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Versioned_writes/Swift_proxy_config[filter:versioned_writes/allow_versioned_writes]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/max_manifest_segments]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/max_manifest_size]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/min_segment_size]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/rate_limit_after_segment]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/rate_limit_segments_per_sec]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/max_get_time]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Dlo/Swift_proxy_config[filter:dlo/rate_limit_after_segment]/ensure: created", > "Notice: 
/Stage[main]/Swift::Proxy::Dlo/Swift_proxy_config[filter:dlo/rate_limit_segments_per_sec]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Dlo/Swift_proxy_config[filter:dlo/max_get_time]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Copy/Swift_proxy_config[filter:copy/object_post_as_copy]/value: value changed 'false' to 'True'", > "Notice: /Stage[main]/Swift::Proxy::Container_quotas/Swift_proxy_config[filter:container_quotas/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Account_quotas/Swift_proxy_config[filter:account_quotas/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Encryption/Swift_proxy_config[filter:encryption/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Encryption/Swift_proxy_config[filter:encryption/disable_encryption]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Kms_keymaster/Swift_proxy_config[filter:kms_keymaster/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Kms_keymaster/Swift_proxy_config[filter:kms_keymaster/keymaster_config_path]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::S3api/Swift_proxy_config[filter:s3api/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::S3api/Swift_proxy_config[filter:s3api/auth_pipeline_check]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::S3token/Swift_proxy_config[filter:s3token/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::S3token/Swift_proxy_config[filter:s3token/auth_uri]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Storage/File[/srv/node]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Storage/File[/srv/node/d1]/ensure: created", > "Notice: /Stage[main]/Swift::Storage::Account/Swift::Storage::Generic[account]/File[/etc/swift/account-server/]/ensure: created", > "Notice: /Stage[main]/Swift::Storage::Container/Swift::Storage::Generic[container]/File[/etc/swift/container-server/]/ensure: created", > "Notice: 
/Stage[main]/Swift::Storage::Object/Swift::Storage::Generic[object]/File[/etc/swift/object-server/]/ensure: created", > "Notice: /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6002]/Concat[/etc/swift/account-server.conf]/File[/etc/swift/account-server.conf]/ensure: defined content as '{md5}2698ee1a6ca83ba1e5fd163435736529'", > "Notice: /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6001]/Concat[/etc/swift/container-server.conf]/File[/etc/swift/container-server.conf]/ensure: defined content as '{md5}fdce9735828d4a88b5c532a7d8a37b41'", > "Notice: /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6000]/Concat[/etc/swift/object-server.conf]/File[/etc/swift/object-server.conf]/ensure: defined content as '{md5}52d9eb03bfcac911c91e17b0d4ccefad'", > "Notice: Applied catalog in 0.51 seconds", > " Total: 97", > " Success: 97", > " Total: 192", > " Skipped: 37", > " Out of sync: 97", > " Changed: 97", > " Swift config: 0.00", > " Swift keymaster config: 0.01", > " Swift object expirer config: 0.01", > " File: 0.04", > " Swift proxy config: 0.19", > " Last run: 1529657852", > " Config retrieval: 2.05", > " Total: 2.31", > " Config: 1529657850", > "Gathering files modified after 2018-06-22 08:57:25.706827409 +0000", > "2018-06-22 08:57:33,918 DEBUG: 9545 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server'", > "+ origin_of_time=/var/lib/config-data/swift.origin_of_time", > "+ touch /var/lib/config-data/swift.origin_of_time", > "+ /usr/bin/puppet apply --summarize 
--detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server /etc/config.pp", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/swift/manifests/config.pp\", 38]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/swift/proxy.pp\", 147]", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/swift/manifests/proxy.pp\", 163]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/swift/proxy.pp\", 148]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/swift/manifests/proxy.pp\", 165]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/swift/proxy.pp\", 148]", > "Warning: Unknown variable: 'methods_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:100:56", > "Warning: Unknown variable: 'incoming_remove_headers_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:101:56", > "Warning: Unknown variable: 'incoming_allow_headers_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:102:56", > "Warning: Unknown variable: 'outgoing_remove_headers_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:103:56", > "Warning: Unknown variable: 'outgoing_allow_headers_real'. 
at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:104:56", > "Warning: Scope(Class[Swift::Storage::All]): The default port for the object storage server has changed from 6000 to 6200 and will be changed in a later release", > "Warning: Scope(Class[Swift::Storage::All]): The default port for the container storage server has changed from 6001 to 6201 and will be changed in a later release", > "Warning: Scope(Class[Swift::Storage::All]): The default port for the account storage server has changed from 6002 to 6202 and will be changed in a later release", > "Warning: Class 'xinetd' is already defined at /etc/config.pp:6; cannot redefine at /etc/puppet/modules/xinetd/manifests/init.pp:12", > "Warning: Unknown variable: 'xinetd::params::default_user'. at /etc/puppet/modules/xinetd/manifests/service.pp:110:14", > "Warning: Unknown variable: 'xinetd::params::default_group'. at /etc/puppet/modules/xinetd/manifests/service.pp:116:15", > "Warning: Unknown variable: 'xinetd::confdir'. at /etc/puppet/modules/xinetd/manifests/service.pp:161:13", > "Warning: Unknown variable: 'xinetd::service_name'. at /etc/puppet/modules/xinetd/manifests/service.pp:166:24", > "Warning: Unknown variable: 'xinetd::confdir'. at /etc/puppet/modules/xinetd/manifests/service.pp:167:21", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Array instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/swift/manifests/storage/server.pp\", 183]:", > " with Pattern[]. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/swift/manifests/storage/server.pp\", 197]:", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/swift", > "++ stat -c %y /var/lib/config-data/swift.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 08:57:25.706827409 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/swift", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/swift", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/swift.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/swift --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/swift --mtime=1970-01-01", > "2018-06-22 08:57:33,918 INFO: 9545 -- Removing container: docker-puppet-swift", > "2018-06-22 08:57:33,954 DEBUG: 9545 -- docker-puppet-swift", > "2018-06-22 08:57:33,954 INFO: 9545 -- Finished processing puppet configs for swift", > "2018-06-22 08:57:33,954 INFO: 9545 -- Starting configuration of heat_api_cfn using image 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-06-19.4", > "2018-06-22 08:57:33,954 DEBUG: 9545 -- config_volume heat_api_cfn", > "2018-06-22 08:57:33,954 DEBUG: 9545 -- puppet_tags file,file_line,concat,augeas,cron,heat_config,file,concat,file_line", > "2018-06-22 08:57:33,954 DEBUG: 9545 -- manifest include ::tripleo::profile::base::heat::api_cfn", > "2018-06-22 08:57:33,955 DEBUG: 9545 -- config_image 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-06-19.4", > "2018-06-22 08:57:33,955 DEBUG: 9545 -- volumes []", > "2018-06-22 08:57:33,955 INFO: 9545 -- Removing container: docker-puppet-heat_api_cfn", > "2018-06-22 08:57:34,021 INFO: 9545 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-06-19.4", > "2018-06-22 08:57:34,684 DEBUG: 9545 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn", > "15497368e843: Already exists", > "4089b2a1d02c: Pulling fs layer", > "4089b2a1d02c: Download complete", > "4089b2a1d02c: Pull complete", > "Digest: sha256:bbcf3cc8eeb6d8910642b40cfa9fe544a33bee49cfb4512abe49c5bf176ed8f0", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-06-19.4", > "2018-06-22 08:57:34,687 DEBUG: 9545 -- NET_HOST enabled", > "2018-06-22 08:57:34,687 DEBUG: 9545 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-heat_api_cfn --env PUPPET_TAGS=file,file_line,concat,augeas,cron,heat_config,file,concat,file_line --env NAME=heat_api_cfn --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmp8ELV6d:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-06-19.4", > "2018-06-22 08:57:34,840 DEBUG: 9547 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-ceilometer-central ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-ceilometer-central", > "333aa6b2b383: Pulling fs layer", > "1eb9ef5adcb4: Pulling fs layer", > "333aa6b2b383: Download complete", > "1eb9ef5adcb4: Verifying Checksum", > "333aa6b2b383: Pull complete", > "1eb9ef5adcb4: Pull complete", > "Digest: sha256:3f638e03aaf1d7e303183e06ff1627a5a0efeaef228a7be1e9667ae62d7d6a1b", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", > "2018-06-22 08:57:34,843 DEBUG: 9547 -- NET_HOST enabled", > "2018-06-22 08:57:34,843 DEBUG: 9547 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-ceilometer --env PUPPET_TAGS=file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config --env NAME=ceilometer --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpg590fV:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", > "2018-06-22 08:57:35,927 DEBUG: 9546 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 0.88 seconds", 
> "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq]/owner: owner changed 'rabbitmq' to 'root'", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq]/group: group changed 'rabbitmq' to 'root'", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq/ssl]/ensure: created", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq-env.config]/ensure: defined content as '{md5}533d474853101f052829224cfe32a526'", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq-inetrc]/ensure: defined content as '{md5}12f8d1a1f9f57f23c1be6c7bf2286e73'", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmqadmin.conf]/ensure: defined content as '{md5}44d4ef5cb86ab30e6127e83939ef09c4'", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/systemd/system/rabbitmq-server.service.d]/ensure: created", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/systemd/system/rabbitmq-server.service.d/limits.conf]/ensure: defined content as '{md5}91d370d2c5a1af171c9d5b5985fca733'", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/security/limits.d/rabbitmq-server.conf]/ensure: defined content as '{md5}1030abc4db405b5f2969643e99bc7435'", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]/content: content changed '{md5}b346ec0a8320f85f795bf612f6b02da7' to '{md5}057abe3718fe43d1105e62c9ba3f0a96'", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]/owner: owner changed 'rabbitmq' to 'root'", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]/mode: mode changed '0644' to '0640'", > "Notice: Applied catalog in 0.30 seconds", > " Total: 12", > " Success: 12", > " Total: 19", > " Out of sync: 9", > " Changed: 9", > " File: 0.28", > " Config retrieval: 1.05", > " Total: 1.32", > " Last run: 1529657855", > " Config: 1529657853", > "Gathering files modified after 2018-06-22 08:57:24.629837925 +0000", > "2018-06-22 08:57:35,928 DEBUG: 9546 -- + mkdir -p /etc/puppet", > "+ origin_of_time=/var/lib/config-data/rabbitmq.origin_of_time", > "+ touch 
/var/lib/config-data/rabbitmq.origin_of_time", > "Warning: ModuleLoader: module 'rabbitmq' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/rabbitmq", > "++ stat -c %y /var/lib/config-data/rabbitmq.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 08:57:24.629837925 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/rabbitmq", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/rabbitmq", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/rabbitmq.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/rabbitmq --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/rabbitmq --mtime=1970-01-01", > "2018-06-22 08:57:35,928 INFO: 9546 -- Removing container: docker-puppet-rabbitmq", > "2018-06-22 08:57:35,970 DEBUG: 9546 -- docker-puppet-rabbitmq", > "2018-06-22 08:57:35,971 INFO: 9546 -- Finished processing puppet configs for rabbitmq", > "2018-06-22 08:57:35,971 INFO: 9546 -- Starting configuration of neutron using image 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-22 08:57:35,971 DEBUG: 9546 -- config_volume neutron", > "2018-06-22 08:57:35,971 DEBUG: 9546 -- puppet_tags file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2", > "2018-06-22 08:57:35,971 DEBUG: 9546 -- manifest include tripleo::profile::base::neutron::server", > "include ::tripleo::profile::base::neutron::plugins::ml2", > "include tripleo::profile::base::neutron::dhcp", > "include 
tripleo::profile::base::neutron::l3", > "include tripleo::profile::base::neutron::metadata", > "include ::tripleo::profile::base::neutron::ovs", > "2018-06-22 08:57:35,971 DEBUG: 9546 -- config_image 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-22 08:57:35,971 DEBUG: 9546 -- volumes [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch']", > "2018-06-22 08:57:35,972 INFO: 9546 -- Removing container: docker-puppet-neutron", > "2018-06-22 08:57:36,039 INFO: 9546 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-22 08:57:40,353 DEBUG: 9546 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-neutron-server ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-neutron-server", > "ea1d509b6f44: Pulling fs layer", > "e9f9993bb931: Pulling fs layer", > "e9f9993bb931: Verifying Checksum", > "e9f9993bb931: Download complete", > "ea1d509b6f44: Verifying Checksum", > "ea1d509b6f44: Download complete", > "ea1d509b6f44: Pull complete", > "e9f9993bb931: Pull complete", > "Digest: sha256:af12594500608f07f8d38590e2c9b2983e5d81ae8b63aec042f36411b0e76adc", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-22 08:57:40,356 DEBUG: 9546 -- NET_HOST enabled", > "2018-06-22 08:57:40,356 DEBUG: 9546 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-neutron --env PUPPET_TAGS=file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 --env NAME=neutron --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpu3pNXn:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume 
/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-22 08:57:42,756 DEBUG: 9547 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 1.33 seconds", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/http_timeout]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[publisher/telemetry_secret]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[database/event_time_to_live]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[database/metering_time_to_live]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_password]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Dispatcher::Gnocchi/Ceilometer_config[dispatcher_gnocchi/filter_project]/ensure: created", > "Notice: 
/Stage[main]/Ceilometer::Dispatcher::Gnocchi/Ceilometer_config[dispatcher_gnocchi/archive_policy]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Dispatcher::Gnocchi/Ceilometer_config[dispatcher_gnocchi/resources_definition_file]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/auth_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/region_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/username]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/password]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/project_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/auth_type]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/interface]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[DEFAULT/polling_namespaces]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[coordination/backend_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Notification/File[event_pipeline]/ensure: defined content as '{md5}dafea5c96d5da5251f9b8a275c6d71aa'", > "Notice: /Stage[main]/Ceilometer::Agent::Notification/Ceilometer_config[notification/ack_on_event_error]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/debug]/ensure: created", > "Notice: 
/Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/topics]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Default[ceilometer_config]/Ceilometer_config[DEFAULT/transport_url]/ensure: created", > "Notice: Applied catalog in 0.61 seconds", > " Total: 31", > " Success: 31", > " Total: 158", > " Out of sync: 31", > " Changed: 31", > " Skipped: 35", > " Ceilometer config: 0.50", > " Config retrieval: 1.54", > " Last run: 1529657861", > " Total: 2.05", > " Config: 1529657859", > "Gathering files modified after 2018-06-22 08:57:35.246736167 +0000", > "2018-06-22 08:57:42,757 DEBUG: 9547 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config'", > "+ origin_of_time=/var/lib/config-data/ceilometer.origin_of_time", > "+ touch /var/lib/config-data/ceilometer.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags 
file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config /etc/config.pp", > "Warning: ModuleLoader: module 'ceilometer' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ceilometer/manifests/config.pp\", 35]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/ceilometer.pp\", 111]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ceilometer/manifests/agent/notification.pp\", 118]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/ceilometer/agent/notification.pp\", 34]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/ceilometer", > "++ stat -c %y /var/lib/config-data/ceilometer.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 08:57:35.246736167 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/ceilometer", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/ceilometer", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/ceilometer.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/ceilometer --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/ceilometer --mtime=1970-01-01", > "2018-06-22 08:57:42,757 INFO: 9547 -- Removing container: docker-puppet-ceilometer", > "2018-06-22 08:57:42,797 DEBUG: 9547 -- docker-puppet-ceilometer", > "2018-06-22 08:57:42,797 INFO: 9547 -- Finished processing puppet configs for ceilometer", > "2018-06-22 08:57:47,685 DEBUG: 9545 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for 
controller-0.localdomain in environment production in 3.60 seconds", > "Notice: /Stage[main]/Heat::Api_cfn/Heat_config[heat_api_cfn/bind_host]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}9b0b061a344ae5b505fbfd36e14a487f'", > "Notice: /Stage[main]/Apache::Mod::Headers/Apache::Mod[headers]/File[headers.load]/ensure: defined content as '{md5}96094c96352002c43ada5bdf8650ff38'", > "Notice: /Stage[main]/Heat::Wsgi::Apache_api_cfn/Heat::Wsgi::Apache[api_cfn]/Openstacklib::Wsgi::Apache[heat_api_cfn_wsgi]/File[/var/www/cgi-bin/heat]/ensure: created", > "Notice: /Stage[main]/Heat::Wsgi::Apache_api_cfn/Heat::Wsgi::Apache[api_cfn]/Openstacklib::Wsgi::Apache[heat_api_cfn_wsgi]/File[heat_api_cfn_wsgi]/ensure: defined content as '{md5}c3ae61ab87649c8cdfab8977da2b194b'", > "Notice: /Stage[main]/Heat::Wsgi::Apache_api_cfn/Heat::Wsgi::Apache[api_cfn]/Openstacklib::Wsgi::Apache[heat_api_cfn_wsgi]/Apache::Vhost[heat_api_cfn_wsgi]/Concat[10-heat_api_cfn_wsgi.conf]/File[/etc/httpd/conf.d/10-heat_api_cfn_wsgi.conf]/ensure: defined content as '{md5}868a3a7d13367e1826461b61cf0f23b3'", > "Notice: Applied catalog in 2.40 seconds", > " Total: 337", > " File: 0.20", > " Heat config: 1.49", > " Last run: 1529657866", > " Config retrieval: 4.08", > " Total: 5.84", > "Gathering files modified after 2018-06-22 08:57:34.896739453 +0000", > "2018-06-22 08:57:47,685 DEBUG: 9545 -- + mkdir -p /etc/puppet", > "+ origin_of_time=/var/lib/config-data/heat_api_cfn.origin_of_time", > "+ touch /var/lib/config-data/heat_api_cfn.origin_of_time", > " with Stdlib::Compat::Integer. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/heat/manifests/wsgi/apache_api_cfn.pp\", 125]:[\"/etc/config.pp\", 2]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/heat_api_cfn", > "++ stat -c %y /var/lib/config-data/heat_api_cfn.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 08:57:34.896739453 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/heat_api_cfn", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/heat_api_cfn", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/heat_api_cfn.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/heat_api_cfn --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/heat_api_cfn --mtime=1970-01-01", > "2018-06-22 08:57:47,685 INFO: 9545 -- Removing container: docker-puppet-heat_api_cfn", > "2018-06-22 08:57:47,735 DEBUG: 9545 -- docker-puppet-heat_api_cfn", > "2018-06-22 08:57:47,735 INFO: 9545 -- Finished processing puppet configs for heat_api_cfn", > "2018-06-22 08:57:52,719 DEBUG: 9546 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.62 seconds", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/bind_host]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/auth_strategy]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/core_plugin]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dns_domain]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agents_per_network]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agent_notification]/ensure: 
created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/allow_overlapping_ips]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/global_physnet_mtu]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[agent/root_helper]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/service_plugins]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/auth_url]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/username]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/password]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/project_domain_id]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/project_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/user_domain_id]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/endpoint_type]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/auth_type]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/tenant_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[DEFAULT/notify_nova_on_port_status_changes]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[DEFAULT/notify_nova_on_port_data_changes]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/l3_ha]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/max_l3_agents_per_router]/ensure: 
created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/api_workers]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/rpc_workers]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/router_scheduler_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/router_distributed]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/enable_dvr]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/allow_automatic_l3agent_failover]/ensure: created", > "Notice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_port]/ensure: created", > "Notice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_firewall_rule]/ensure: created", > "Notice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_network_gateway]/ensure: created", > "Notice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_packet_filter]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/File[/etc/neutron/plugin.ini]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/File[/etc/default/neutron-server]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/type_drivers]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/tenant_network_types]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/mechanism_drivers]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/path_mtu]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/extension_drivers]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/overlay_ip_version]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[securitygroup/firewall_driver]/ensure: created", > "Notice: 
/Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/enable_isolated_metadata]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/force_metadata]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/enable_metadata_network]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/state_path]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/resync_interval]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/interface_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/root_helper]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/dnsmasq_dns_servers]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/interface_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/agent_mode]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/nova_metadata_ip]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/nova_metadata_host]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/nova_metadata_protocol]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/metadata_proxy_shared_secret]/ensure: created", > "Notice: 
/Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/metadata_workers]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/bridge_mappings]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/l2_population]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/arp_responder]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/enable_distributed_routing]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/drop_flows_on_start]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/extensions]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/integration_bridge]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[securitygroup/firewall_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/tunnel_bridge]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/local_ip]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/tunnel_types]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/vxlan_udp_port]/ensure: created", > "Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/control_exchange]/ensure: created", > "Notice: 
/Stage[main]/Neutron/Oslo::Concurrency[neutron_config]/Neutron_config[oslo_concurrency/lock_path]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_password]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_userid]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_port]/ensure: created", > "Notice: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/db_max_retries]/ensure: created", > "Notice: /Stage[main]/Neutron::Policy/Oslo::Policy[neutron_config]/Neutron_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: 
/Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Oslo::Middleware[neutron_config]/Neutron_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vxlan]/Neutron_plugin_ml2[ml2_type_vxlan/vxlan_group]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vxlan]/Neutron_plugin_ml2[ml2_type_vxlan/vni_ranges]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vlan]/Neutron_plugin_ml2[ml2_type_vlan/network_vlan_ranges]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[flat]/Neutron_plugin_ml2[ml2_type_flat/flat_networks]/ensure: created", > "Notice: 
/Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[gre]/Neutron_plugin_ml2[ml2_type_gre/tunnel_id_ranges]/ensure: created", > "Notice: Applied catalog in 2.07 seconds", > " Total: 107", > " Success: 107", > " Changed: 107", > " Out of sync: 107", > " Total: 359", > " Skipped: 44", > " Neutron api config: 0.00", > " Neutron agent ovs: 0.01", > " Neutron l3 agent config: 0.02", > " Neutron metadata agent config: 0.02", > " Neutron plugin ml2: 0.03", > " Neutron dhcp agent config: 0.09", > " Augeas: 0.38", > " Neutron config: 1.25", > " Last run: 1529657871", > " Config retrieval: 4.02", > " Total: 5.86", > " Config: 1529657865", > "Gathering files modified after 2018-06-22 08:57:40.537687008 +0000", > "2018-06-22 08:57:52,719 DEBUG: 9546 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2'", > "+ origin_of_time=/var/lib/config-data/neutron.origin_of_time", > "+ touch /var/lib/config-data/neutron.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 /etc/config.pp", > "Warning: Scope(Class[Neutron]): 
neutron::rabbit_host, neutron::rabbit_hosts, neutron::rabbit_password, neutron::rabbit_port, neutron::rabbit_user, neutron::rabbit_virtual_host and neutron::rpc_backend are deprecated. Please use neutron::default_transport_url instead.", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Array instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/neutron/manifests/init.pp\", 530]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron/server.pp\", 104]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/neutron/manifests/config.pp\", 132]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron.pp\", 141]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/neutron/manifests/db.pp\", 69]:[\"/etc/puppet/modules/neutron/manifests/server.pp\", 315]", > "Warning: Scope(Class[Neutron::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > "Warning: Unknown variable: '::neutron::params::metadata_agent_package'. at /etc/puppet/modules/neutron/manifests/agents/metadata.pp:122:6", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/neutron/manifests/agents/ml2/ovs.pp\", 219]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron/ovs.pp\", 59]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/neutron", > "++ stat -c %y /var/lib/config-data/neutron.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 08:57:40.537687008 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/neutron", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/neutron", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/neutron.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/neutron --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/neutron --mtime=1970-01-01", > "2018-06-22 08:57:52,719 INFO: 9546 -- Removing container: docker-puppet-neutron", > "2018-06-22 08:57:52,754 DEBUG: 9546 -- docker-puppet-neutron", > "2018-06-22 08:57:52,755 INFO: 9546 -- Finished processing puppet configs for neutron", > "2018-06-22 08:57:52,755 INFO: 9546 -- Starting configuration of horizon using image 192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4", > "2018-06-22 08:57:52,755 DEBUG: 9546 -- config_volume horizon", > "2018-06-22 08:57:52,755 DEBUG: 9546 -- puppet_tags file,file_line,concat,augeas,cron,horizon_config", > "2018-06-22 08:57:52,755 DEBUG: 9546 -- manifest include ::tripleo::profile::base::horizon", > "2018-06-22 08:57:52,755 DEBUG: 9546 -- config_image 192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4", > "2018-06-22 08:57:52,755 DEBUG: 9546 -- volumes []", > "2018-06-22 08:57:52,756 INFO: 9546 -- Removing container: docker-puppet-horizon", > "2018-06-22 08:57:52,815 INFO: 9546 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4", > "2018-06-22 08:57:57,888 DEBUG: 9546 -- Trying to pull repository 
192.168.24.1:8787/rhosp14/openstack-horizon ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-horizon", > "76e0e41ffb2e: Pulling fs layer", > "76e0e41ffb2e: Download complete", > "76e0e41ffb2e: Pull complete", > "Digest: sha256:985bc1250661a931ac3368fe39a6651116c123db6c18789bfdb7da2c61741b0d", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4", > "2018-06-22 08:57:57,891 DEBUG: 9546 -- NET_HOST enabled", > "2018-06-22 08:57:57,892 DEBUG: 9546 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-horizon --env PUPPET_TAGS=file,file_line,concat,augeas,cron,horizon_config --env NAME=horizon --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmp69gCId:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4", > "2018-06-22 08:58:07,246 DEBUG: 9546 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.24 seconds", > "Notice: /Stage[main]/Apache::Mod::Remoteip/File[remoteip.conf]/ensure: defined content as 
'{md5}231f27030ddccda211c6456c98499d6a'", > "Notice: /Stage[main]/Horizon::Wsgi::Apache/File[/var/log/horizon]/mode: mode changed '0750' to '0751'", > "Notice: /Stage[main]/Horizon::Wsgi::Apache/File[/var/log/horizon/horizon.log]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}5025b65739de577d638381df362f5d7c'", > "Notice: /Stage[main]/Apache::Mod::Remoteip/Apache::Mod[remoteip]/File[remoteip.load]/ensure: defined content as '{md5}118eb7518a1d018a162d23dfe32c4bad'", > "Notice: /Stage[main]/Horizon/Concat[/etc/openstack-dashboard/local_settings]/File[/etc/openstack-dashboard/local_settings]/content: content changed '{md5}601e633104479c5b9ee828b4bae911ac' to '{md5}d48b9714c5e9a216807a24cbb02b9f8e'", > "Notice: /Stage[main]/Horizon/Concat[/etc/openstack-dashboard/local_settings]/File[/etc/openstack-dashboard/local_settings]/owner: owner changed 'horizon' to 'apache'", > "Notice: /Stage[main]/Horizon/Concat[/etc/openstack-dashboard/local_settings]/File[/etc/openstack-dashboard/local_settings]/group: group changed 'horizon' to 'apache'", > "Notice: /Stage[main]/Horizon::Wsgi::Apache/File[/etc/httpd/conf.d/openstack-dashboard.conf]/content: content changed '{md5}4cb4b1391d3553951208fad1ce791e5c' to '{md5}3f4b1c53d0e150dae37b3ee5dcaf622d'", > "Notice: /Stage[main]/Horizon::Wsgi::Apache/Apache::Vhost[horizon_vhost]/Concat[10-horizon_vhost.conf]/File[/etc/httpd/conf.d/10-horizon_vhost.conf]/ensure: defined content as '{md5}7e0ab6228ac14640a145c04afd1af4f1'", > "Notice: Applied catalog in 0.69 seconds", > " Total: 86", > " Success: 86", > " Total: 172", > " Out of sync: 84", > " Changed: 84", > " Concat fragment: 0.09", > " File: 0.22", > " Last run: 1529657886", > " Config retrieval: 2.60", > " Total: 2.92", > " Config: 1529657882", > "Gathering files modified after 2018-06-22 08:57:58.089530995 +0000", > "2018-06-22 08:58:07,246 DEBUG: 9546 -- + mkdir -p /etc/puppet", > "+ 
'[' -n file,file_line,concat,augeas,cron,horizon_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,horizon_config'", > "+ origin_of_time=/var/lib/config-data/horizon.origin_of_time", > "+ touch /var/lib/config-data/horizon.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,horizon_config /etc/config.pp", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ipv6 instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/horizon.pp\", 97]:[\"/etc/config.pp\", 2]", > "Warning: ModuleLoader: module 'horizon' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: Undefined variable ''; ", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/horizon/manifests/init.pp\", 559]:[\"/etc/config.pp\", 2]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/horizon/manifests/init.pp\", 560]:[\"/etc/config.pp\", 2]", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/horizon/manifests/init.pp\", 562]:[\"/etc/config.pp\", 2]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/horizon", > "++ stat -c %y /var/lib/config-data/horizon.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 08:57:58.089530995 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/horizon", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/horizon", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/horizon.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/horizon --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/horizon --mtime=1970-01-01", > "2018-06-22 08:58:07,246 INFO: 9546 -- Removing container: docker-puppet-horizon", > "2018-06-22 08:58:07,291 DEBUG: 9546 -- docker-puppet-horizon", > "2018-06-22 08:58:07,291 INFO: 9546 -- Finished processing puppet configs for horizon", > "2018-06-22 08:58:07,292 DEBUG: 9544 -- CONFIG_VOLUME_PREFIX: /var/lib/config-data", > "2018-06-22 08:58:07,293 DEBUG: 9544 -- STARTUP_CONFIG_PATTERN: /var/lib/tripleo-config/docker-container-startup-config-step_*.json", > "2018-06-22 08:58:07,295 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/memcached/etc/sysconfig.md5sum for config_volume /var/lib/config-data/memcached/etc/sysconfig", > "2018-06-22 08:58:07,296 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/mysql.md5sum for config_volume /var/lib/config-data/puppet-generated/mysql", > "2018-06-22 08:58:07,296 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/mysql.md5sum for config_volume /var/lib/config-data/puppet-generated/mysql", > "2018-06-22 08:58:07,296 DEBUG: 9544 -- Updating config hash for mysql_bootstrap, config_volume=heat_api_cfn hash=711a5263a19838033c36e1f767017362", > "2018-06-22 08:58:07,296 
DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/rabbitmq.md5sum for config_volume /var/lib/config-data/puppet-generated/rabbitmq", > "2018-06-22 08:58:07,296 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/rabbitmq.md5sum for config_volume /var/lib/config-data/puppet-generated/rabbitmq", > "2018-06-22 08:58:07,297 DEBUG: 9544 -- Updating config hash for rabbitmq_bootstrap, config_volume=heat_api_cfn hash=f4f9ca7311c32107fd4be476ca19a154", > "2018-06-22 08:58:07,297 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/memcached/etc/sysconfig.md5sum for config_volume /var/lib/config-data/memcached/etc/sysconfig", > "2018-06-22 08:58:07,299 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova_placement.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_placement", > "2018-06-22 08:58:07,299 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/nova_placement.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_placement", > "2018-06-22 08:58:07,300 DEBUG: 9544 -- Updating config hash for nova_placement, config_volume=heat_api_cfn hash=b340d2f1c5fdca81cfdec41825fd92df", > "2018-06-22 08:58:07,300 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/nova/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/nova/etc/my.cnf.d", > "2018-06-22 08:58:07,300 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/nova/etc/nova.md5sum for config_volume /var/lib/config-data/nova/etc/nova", > "2018-06-22 08:58:07,300 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/heat/etc/heat.md5sum for config_volume /var/lib/config-data/heat/etc/heat", > "2018-06-22 08:58:07,300 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/heat/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/heat/etc/my.cnf.d", > "2018-06-22 08:58:07,300 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data.md5sum for config_volume /var/lib/config-data", > 
"2018-06-22 08:58:07,300 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift/etc.md5sum for config_volume /var/lib/config-data/puppet-generated/swift/etc", > "2018-06-22 08:58:07,300 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/keystone.md5sum for config_volume /var/lib/config-data/puppet-generated/keystone", > "2018-06-22 08:58:07,300 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/keystone.md5sum for config_volume /var/lib/config-data/puppet-generated/keystone", > "2018-06-22 08:58:07,301 DEBUG: 9544 -- Updating config hash for keystone_cron, config_volume=heat_api_cfn hash=50d05e1077b1755db4f0d0b03d43801a", > "2018-06-22 08:58:07,301 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/panko/etc.md5sum for config_volume /var/lib/config-data/panko/etc", > "2018-06-22 08:58:07,301 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/panko/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/panko/etc/my.cnf.d", > "2018-06-22 08:58:07,301 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/nova/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/nova/etc/my.cnf.d", > "2018-06-22 08:58:07,301 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/nova/etc/nova.md5sum for config_volume /var/lib/config-data/nova/etc/nova", > "2018-06-22 08:58:07,301 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/keystone.md5sum for config_volume /var/lib/config-data/puppet-generated/keystone", > "2018-06-22 08:58:07,301 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/keystone.md5sum for config_volume /var/lib/config-data/puppet-generated/keystone", > "2018-06-22 08:58:07,301 DEBUG: 9544 -- Updating config hash for keystone_db_sync, config_volume=heat_api_cfn hash=50d05e1077b1755db4f0d0b03d43801a", > "2018-06-22 08:58:07,301 DEBUG: 9544 -- Updating config hash for keystone, config_volume=heat_api_cfn hash=50d05e1077b1755db4f0d0b03d43801a", 
> "2018-06-22 08:58:07,302 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/aodh/etc/aodh.md5sum for config_volume /var/lib/config-data/aodh/etc/aodh", > "2018-06-22 08:58:07,302 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/aodh/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/aodh/etc/my.cnf.d", > "2018-06-22 08:58:07,302 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-22 08:58:07,302 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-22 08:58:07,302 DEBUG: 9544 -- Updating config hash for neutron_ovs_bridge, config_volume=heat_api_cfn hash=0ba9856be65f9f8dad8d5c9043f1f04e", > "2018-06-22 08:58:07,302 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/cinder/etc/cinder.md5sum for config_volume /var/lib/config-data/cinder/etc/cinder", > "2018-06-22 08:58:07,302 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/cinder/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/cinder/etc/my.cnf.d", > "2018-06-22 08:58:07,302 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/nova/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/nova/etc/my.cnf.d", > "2018-06-22 08:58:07,302 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/nova/etc/nova.md5sum for config_volume /var/lib/config-data/nova/etc/nova", > "2018-06-22 08:58:07,302 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/glance_api.md5sum for config_volume /var/lib/config-data/puppet-generated/glance_api", > "2018-06-22 08:58:07,302 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/glance_api.md5sum for config_volume /var/lib/config-data/puppet-generated/glance_api", > "2018-06-22 08:58:07,302 DEBUG: 9544 -- Updating config hash for glance_api_db_sync, config_volume=heat_api_cfn 
hash=7f36983eded61f111947d433ed716d12", > "2018-06-22 08:58:07,303 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/neutron/etc.md5sum for config_volume /var/lib/config-data/neutron/etc", > "2018-06-22 08:58:07,303 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/neutron/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/neutron/etc/my.cnf.d", > "2018-06-22 08:58:07,303 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/neutron/usr/share.md5sum for config_volume /var/lib/config-data/neutron/usr/share", > "2018-06-22 08:58:07,303 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/sahara/etc/sahara.md5sum for config_volume /var/lib/config-data/sahara/etc/sahara", > "2018-06-22 08:58:07,303 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/horizon.md5sum for config_volume /var/lib/config-data/puppet-generated/horizon", > "2018-06-22 08:58:07,303 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/horizon.md5sum for config_volume /var/lib/config-data/puppet-generated/horizon", > "2018-06-22 08:58:07,303 DEBUG: 9544 -- Updating config hash for horizon, config_volume=heat_api_cfn hash=a147860cf3bb8f4c303c8d0b15b68101", > "2018-06-22 08:58:07,305 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/clustercheck.md5sum for config_volume /var/lib/config-data/puppet-generated/clustercheck", > "2018-06-22 08:58:07,305 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/clustercheck.md5sum for config_volume /var/lib/config-data/puppet-generated/clustercheck", > "2018-06-22 08:58:07,305 DEBUG: 9544 -- Updating config hash for clustercheck, config_volume=heat_api_cfn hash=fe797731f65665916c2ce948a89f6349", > "2018-06-22 08:58:07,305 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/mysql.md5sum for config_volume /var/lib/config-data/puppet-generated/mysql", > "2018-06-22 08:58:07,305 DEBUG: 9544 -- Got hashfile 
/var/lib/config-data/puppet-generated/mysql.md5sum for config_volume /var/lib/config-data/puppet-generated/mysql", > "2018-06-22 08:58:07,306 DEBUG: 9544 -- Updating config hash for mysql_restart_bundle, config_volume=heat_api_cfn hash=711a5263a19838033c36e1f767017362", > "2018-06-22 08:58:07,306 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/haproxy.md5sum for config_volume /var/lib/config-data/puppet-generated/haproxy", > "2018-06-22 08:58:07,306 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/haproxy.md5sum for config_volume /var/lib/config-data/puppet-generated/haproxy", > "2018-06-22 08:58:07,306 DEBUG: 9544 -- Updating config hash for haproxy_restart_bundle, config_volume=heat_api_cfn hash=a4fabb143c89ebd969b2af52e03e81e8", > "2018-06-22 08:58:07,306 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/rabbitmq.md5sum for config_volume /var/lib/config-data/puppet-generated/rabbitmq", > "2018-06-22 08:58:07,306 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/rabbitmq.md5sum for config_volume /var/lib/config-data/puppet-generated/rabbitmq", > "2018-06-22 08:58:07,306 DEBUG: 9544 -- Updating config hash for rabbitmq_restart_bundle, config_volume=heat_api_cfn hash=f4f9ca7311c32107fd4be476ca19a154", > "2018-06-22 08:58:07,306 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/horizon/etc.md5sum for config_volume /var/lib/config-data/puppet-generated/horizon/etc", > "2018-06-22 08:58:07,306 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/redis.md5sum for config_volume /var/lib/config-data/puppet-generated/redis", > "2018-06-22 08:58:07,306 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/redis.md5sum for config_volume /var/lib/config-data/puppet-generated/redis", > "2018-06-22 08:58:07,307 DEBUG: 9544 -- Updating config hash for redis_restart_bundle, config_volume=heat_api_cfn hash=7d88acf07d20b6ee937c61e7a6697675", > 
"2018-06-22 08:58:07,308 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-06-22 08:58:07,308 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-06-22 08:58:07,308 DEBUG: 9544 -- Updating config hash for cinder_volume_restart_bundle, config_volume=heat_api_cfn hash=8b0f400fd78d8e186beae629986ddbb0", > "2018-06-22 08:58:07,308 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/gnocchi.md5sum for config_volume /var/lib/config-data/puppet-generated/gnocchi", > "2018-06-22 08:58:07,308 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/gnocchi.md5sum for config_volume /var/lib/config-data/puppet-generated/gnocchi", > "2018-06-22 08:58:07,309 DEBUG: 9544 -- Updating config hash for gnocchi_statsd, config_volume=heat_api_cfn hash=1b76fbb17f0eeb228887a33d547a50e3", > "2018-06-22 08:58:07,309 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-06-22 08:58:07,309 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-06-22 08:58:07,309 DEBUG: 9544 -- Updating config hash for cinder_backup_restart_bundle, config_volume=heat_api_cfn hash=8b0f400fd78d8e186beae629986ddbb0", > "2018-06-22 08:58:07,309 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/gnocchi.md5sum for config_volume /var/lib/config-data/puppet-generated/gnocchi", > "2018-06-22 08:58:07,309 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/gnocchi.md5sum for config_volume /var/lib/config-data/puppet-generated/gnocchi", > "2018-06-22 08:58:07,309 DEBUG: 9544 -- Updating config hash for gnocchi_metricd, 
config_volume=heat_api_cfn hash=1b76fbb17f0eeb228887a33d547a50e3", > "2018-06-22 08:58:07,309 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/nova/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/nova/etc/my.cnf.d", > "2018-06-22 08:58:07,309 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/nova/etc/nova.md5sum for config_volume /var/lib/config-data/nova/etc/nova", > "2018-06-22 08:58:07,309 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/ceilometer/etc/ceilometer.md5sum for config_volume /var/lib/config-data/ceilometer/etc/ceilometer", > "2018-06-22 08:58:07,310 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/gnocchi.md5sum for config_volume /var/lib/config-data/puppet-generated/gnocchi", > "2018-06-22 08:58:07,310 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/gnocchi.md5sum for config_volume /var/lib/config-data/puppet-generated/gnocchi", > "2018-06-22 08:58:07,310 DEBUG: 9544 -- Updating config hash for gnocchi_api, config_volume=heat_api_cfn hash=1b76fbb17f0eeb228887a33d547a50e3", > "2018-06-22 08:58:07,311 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-22 08:58:07,311 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-22 08:58:07,312 DEBUG: 9544 -- Updating config hash for swift_container_updater, config_volume=heat_api_cfn hash=9641e100ccff8ed4018c605569e3de84", > "2018-06-22 08:58:07,312 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-06-22 08:58:07,312 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-06-22 08:58:07,312 DEBUG: 9544 -- Updating config hash for 
aodh_evaluator, config_volume=heat_api_cfn hash=2689fdf92b430d93974a79035197cdfd", > "2018-06-22 08:58:07,312 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-06-22 08:58:07,312 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-06-22 08:58:07,312 DEBUG: 9544 -- Updating config hash for nova_scheduler, config_volume=heat_api_cfn hash=0329cb217ac11af45008194e84979efb", > "2018-06-22 08:58:07,312 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-22 08:58:07,312 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-22 08:58:07,312 DEBUG: 9544 -- Updating config hash for swift_object_server, config_volume=heat_api_cfn hash=9641e100ccff8ed4018c605569e3de84", > "2018-06-22 08:58:07,312 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-06-22 08:58:07,312 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-06-22 08:58:07,313 DEBUG: 9544 -- Updating config hash for cinder_api, config_volume=heat_api_cfn hash=8b0f400fd78d8e186beae629986ddbb0", > "2018-06-22 08:58:07,313 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-22 08:58:07,313 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-22 08:58:07,313 DEBUG: 9544 -- Updating config hash for 
swift_proxy, config_volume=heat_api_cfn hash=9641e100ccff8ed4018c605569e3de84", > "2018-06-22 08:58:07,313 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-22 08:58:07,313 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-22 08:58:07,313 DEBUG: 9544 -- Updating config hash for neutron_dhcp, config_volume=heat_api_cfn hash=0ba9856be65f9f8dad8d5c9043f1f04e", > "2018-06-22 08:58:07,313 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/heat_api.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api", > "2018-06-22 08:58:07,313 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/heat_api.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api", > "2018-06-22 08:58:07,313 DEBUG: 9544 -- Updating config hash for heat_api, config_volume=heat_api_cfn hash=147f6d69c70a715a0d9b51a5b6cbbefb", > "2018-06-22 08:58:07,314 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-22 08:58:07,314 DEBUG: 9544 -- Updating config hash for swift_object_auditor, config_volume=heat_api_cfn hash=9641e100ccff8ed4018c605569e3de84", > "2018-06-22 08:58:07,314 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-22 08:58:07,314 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-22 08:58:07,314 DEBUG: 9544 -- Updating config hash for neutron_metadata_agent, config_volume=heat_api_cfn hash=0ba9856be65f9f8dad8d5c9043f1f04e", > "2018-06-22 08:58:07,314 DEBUG: 9544 -- Looking for hashfile 
/var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer", > "2018-06-22 08:58:07,314 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer", > "2018-06-22 08:58:07,314 DEBUG: 9544 -- Updating config hash for ceilometer_agent_central, config_volume=heat_api_cfn hash=bc336727283171bb485a9a2dccfe90d1", > "2018-06-22 08:58:07,314 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-22 08:58:07,314 DEBUG: 9544 -- Updating config hash for swift_account_replicator, config_volume=heat_api_cfn hash=9641e100ccff8ed4018c605569e3de84", > "2018-06-22 08:58:07,314 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-06-22 08:58:07,315 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-06-22 08:58:07,315 DEBUG: 9544 -- Updating config hash for aodh_notifier, config_volume=heat_api_cfn hash=2689fdf92b430d93974a79035197cdfd", > "2018-06-22 08:58:07,315 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-06-22 08:58:07,315 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-06-22 08:58:07,315 DEBUG: 9544 -- Updating config hash for nova_api_cron, config_volume=heat_api_cfn hash=0329cb217ac11af45008194e84979efb", > "2018-06-22 08:58:07,315 DEBUG: 9544 -- Updating config hash for nova_consoleauth, config_volume=heat_api_cfn hash=0329cb217ac11af45008194e84979efb", > "2018-06-22 08:58:07,315 DEBUG: 9544 -- Looking for 
hashfile /var/lib/config-data/puppet-generated/gnocchi.md5sum for config_volume /var/lib/config-data/puppet-generated/gnocchi", > "2018-06-22 08:58:07,315 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/gnocchi.md5sum for config_volume /var/lib/config-data/puppet-generated/gnocchi", > "2018-06-22 08:58:07,316 DEBUG: 9544 -- Updating config hash for gnocchi_db_sync, config_volume=heat_api_cfn hash=1b76fbb17f0eeb228887a33d547a50e3", > "2018-06-22 08:58:07,316 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-22 08:58:07,316 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-22 08:58:07,316 DEBUG: 9544 -- Updating config hash for swift_account_reaper, config_volume=heat_api_cfn hash=9641e100ccff8ed4018c605569e3de84", > "2018-06-22 08:58:07,316 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer", > "2018-06-22 08:58:07,316 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer", > "2018-06-22 08:58:07,316 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/panko.md5sum for config_volume /var/lib/config-data/puppet-generated/panko", > "2018-06-22 08:58:07,316 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/panko.md5sum for config_volume /var/lib/config-data/puppet-generated/panko", > "2018-06-22 08:58:07,316 DEBUG: 9544 -- Updating config hash for ceilometer_agent_notification, config_volume=heat_api_cfn hash=bc336727283171bb485a9a2dccfe90d1-ee4f58d3ecac99192c4a3517af5f7d1b", > "2018-06-22 08:58:07,316 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova.md5sum for 
config_volume /var/lib/config-data/puppet-generated/nova", > "2018-06-22 08:58:07,316 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-06-22 08:58:07,316 DEBUG: 9544 -- Updating config hash for nova_vnc_proxy, config_volume=heat_api_cfn hash=0329cb217ac11af45008194e84979efb", > "2018-06-22 08:58:07,317 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-22 08:58:07,317 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-22 08:58:07,317 DEBUG: 9544 -- Updating config hash for swift_rsync, config_volume=heat_api_cfn hash=9641e100ccff8ed4018c605569e3de84", > "2018-06-22 08:58:07,317 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-06-22 08:58:07,317 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-06-22 08:58:07,317 DEBUG: 9544 -- Updating config hash for nova_api, config_volume=heat_api_cfn hash=0329cb217ac11af45008194e84979efb", > "2018-06-22 08:58:07,317 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-06-22 08:58:07,317 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-06-22 08:58:07,317 DEBUG: 9544 -- Updating config hash for aodh_api, config_volume=heat_api_cfn hash=2689fdf92b430d93974a79035197cdfd", > "2018-06-22 08:58:07,317 DEBUG: 9544 -- Updating config hash for nova_metadata, config_volume=heat_api_cfn hash=0329cb217ac11af45008194e84979efb", > 
"2018-06-22 08:58:07,317 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/heat.md5sum for config_volume /var/lib/config-data/puppet-generated/heat", > "2018-06-22 08:58:07,317 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/heat.md5sum for config_volume /var/lib/config-data/puppet-generated/heat", > "2018-06-22 08:58:07,318 DEBUG: 9544 -- Updating config hash for heat_engine, config_volume=heat_api_cfn hash=3f2095b21ffd74dff3f66e0ae1650a18", > "2018-06-22 08:58:07,318 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-22 08:58:07,318 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-22 08:58:07,318 DEBUG: 9544 -- Updating config hash for swift_container_server, config_volume=heat_api_cfn hash=9641e100ccff8ed4018c605569e3de84", > "2018-06-22 08:58:07,318 DEBUG: 9544 -- Updating config hash for swift_object_replicator, config_volume=heat_api_cfn hash=9641e100ccff8ed4018c605569e3de84", > "2018-06-22 08:58:07,318 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-22 08:58:07,318 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-22 08:58:07,318 DEBUG: 9544 -- Updating config hash for neutron_l3_agent, config_volume=heat_api_cfn hash=0ba9856be65f9f8dad8d5c9043f1f04e", > "2018-06-22 08:58:07,318 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-06-22 08:58:07,318 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume 
/var/lib/config-data/puppet-generated/cinder", > "2018-06-22 08:58:07,319 DEBUG: 9544 -- Updating config hash for cinder_scheduler, config_volume=heat_api_cfn hash=8b0f400fd78d8e186beae629986ddbb0", > "2018-06-22 08:58:07,319 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-06-22 08:58:07,319 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-06-22 08:58:07,319 DEBUG: 9544 -- Updating config hash for nova_conductor, config_volume=heat_api_cfn hash=0329cb217ac11af45008194e84979efb", > "2018-06-22 08:58:07,319 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/heat_api_cfn.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api_cfn", > "2018-06-22 08:58:07,319 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/heat_api_cfn.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api_cfn", > "2018-06-22 08:58:07,319 DEBUG: 9544 -- Updating config hash for heat_api_cfn, config_volume=heat_api_cfn hash=528cee6e49e5ccd1fdd0f9f20071802b", > "2018-06-22 08:58:07,319 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/sahara.md5sum for config_volume /var/lib/config-data/puppet-generated/sahara", > "2018-06-22 08:58:07,319 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/sahara.md5sum for config_volume /var/lib/config-data/puppet-generated/sahara", > "2018-06-22 08:58:07,319 DEBUG: 9544 -- Updating config hash for sahara_api, config_volume=heat_api_cfn hash=7d60ae93ac1d5bf44f3cbe00aeacd186", > "2018-06-22 08:58:07,319 DEBUG: 9544 -- Updating config hash for sahara_engine, config_volume=heat_api_cfn hash=7d60ae93ac1d5bf44f3cbe00aeacd186", > "2018-06-22 08:58:07,320 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume 
/var/lib/config-data/puppet-generated/neutron", > "2018-06-22 08:58:07,320 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-22 08:58:07,320 DEBUG: 9544 -- Updating config hash for neutron_ovs_agent, config_volume=heat_api_cfn hash=0ba9856be65f9f8dad8d5c9043f1f04e", > "2018-06-22 08:58:07,320 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-06-22 08:58:07,320 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-06-22 08:58:07,320 DEBUG: 9544 -- Updating config hash for cinder_api_cron, config_volume=heat_api_cfn hash=8b0f400fd78d8e186beae629986ddbb0", > "2018-06-22 08:58:07,320 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-22 08:58:07,320 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-22 08:58:07,320 DEBUG: 9544 -- Updating config hash for swift_account_auditor, config_volume=heat_api_cfn hash=9641e100ccff8ed4018c605569e3de84", > "2018-06-22 08:58:07,320 DEBUG: 9544 -- Updating config hash for swift_container_replicator, config_volume=heat_api_cfn hash=9641e100ccff8ed4018c605569e3de84", > "2018-06-22 08:58:07,321 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-22 08:58:07,321 DEBUG: 9544 -- Updating config hash for swift_object_updater, config_volume=heat_api_cfn hash=9641e100ccff8ed4018c605569e3de84", > "2018-06-22 08:58:07,321 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for 
config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-22 08:58:07,321 DEBUG: 9544 -- Updating config hash for swift_object_expirer, config_volume=heat_api_cfn hash=9641e100ccff8ed4018c605569e3de84", > "2018-06-22 08:58:07,321 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/heat_api.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api", > "2018-06-22 08:58:07,321 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/heat_api.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api", > "2018-06-22 08:58:07,321 DEBUG: 9544 -- Updating config hash for heat_api_cron, config_volume=heat_api_cfn hash=147f6d69c70a715a0d9b51a5b6cbbefb", > "2018-06-22 08:58:07,321 DEBUG: 9544 -- Updating config hash for swift_container_auditor, config_volume=heat_api_cfn hash=9641e100ccff8ed4018c605569e3de84", > "2018-06-22 08:58:07,321 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/panko.md5sum for config_volume /var/lib/config-data/puppet-generated/panko", > "2018-06-22 08:58:07,322 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/panko.md5sum for config_volume /var/lib/config-data/puppet-generated/panko", > "2018-06-22 08:58:07,322 DEBUG: 9544 -- Updating config hash for panko_api, config_volume=heat_api_cfn hash=ee4f58d3ecac99192c4a3517af5f7d1b", > "2018-06-22 08:58:07,322 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-06-22 08:58:07,322 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-06-22 08:58:07,322 DEBUG: 9544 -- Updating config hash for aodh_listener, config_volume=heat_api_cfn hash=2689fdf92b430d93974a79035197cdfd", > "2018-06-22 08:58:07,322 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for 
config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-22 08:58:07,322 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-22 08:58:07,322 DEBUG: 9544 -- Updating config hash for neutron_api, config_volume=heat_api_cfn hash=0ba9856be65f9f8dad8d5c9043f1f04e", > "2018-06-22 08:58:07,322 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-22 08:58:07,322 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-22 08:58:07,322 DEBUG: 9544 -- Updating config hash for swift_account_server, config_volume=heat_api_cfn hash=9641e100ccff8ed4018c605569e3de84", > "2018-06-22 08:58:07,322 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/glance_api.md5sum for config_volume /var/lib/config-data/puppet-generated/glance_api", > "2018-06-22 08:58:07,322 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/glance_api.md5sum for config_volume /var/lib/config-data/puppet-generated/glance_api", > "2018-06-22 08:58:07,323 DEBUG: 9544 -- Updating config hash for glance_api, config_volume=heat_api_cfn hash=7f36983eded61f111947d433ed716d12", > "2018-06-22 08:58:07,323 DEBUG: 9544 -- Looking for hashfile /var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond", > "2018-06-22 08:58:07,323 DEBUG: 9544 -- Got hashfile /var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond", > "2018-06-22 08:58:07,323 DEBUG: 9544 -- Updating config hash for logrotate_crond, config_volume=heat_api_cfn hash=5dce4228d15560f77e28b10bddada6fc" > ] >} >2018-06-22 04:58:08,688 p=11115 u=mistral | TASK [Start containers for step 1] 
********************************************* >2018-06-22 04:58:09,394 p=11115 u=mistral | ok: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 04:58:09,437 p=11115 u=mistral | ok: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 04:58:37,662 p=11115 u=mistral | ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 04:58:37,690 p=11115 u=mistral | TASK [Debug output for task which failed: Start containers for step 1] ********* >2018-06-22 04:58:37,751 p=11115 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cinder-backup ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-cinder-backup", > "e0f71f706c2a: Already exists", > "121ab4741000: Already exists", > "a8ff0031dfcb: Already exists", > "c66228eb2ac7: Already exists", > "5e7b63a88a76: Already exists", > "89c035649aaf: Pulling fs layer", > "89c035649aaf: Download complete", > "89c035649aaf: Pull complete", > "Digest: sha256:bbd94b3a8477e286264ef2b5660a8c60d872d945e37c6023ae19c6dd09ea156f", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4", > "", > "stderr: ", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cinder-volume ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-cinder-volume", > "606ec38d3d26: Pulling fs layer", > "606ec38d3d26: Download complete", > "606ec38d3d26: Pull complete", > "Digest: sha256:d4d518ef6aad7c077ff97a0ad1de70ef4074ace3ddde85fdfb70e12e63891ea5", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4", > "stdout: ", > "stdout: 7d795f6ba98e54205e4db2ccc19935070a71426c74a50342d92b0239ab1c3c63", > "stdout: Installing MariaDB/MySQL system tables in '/var/lib/mysql' ...", > "OK", > "Filling help tables...", > "Creating OpenGIS required SP-s...", > "To start mysqld at boot time you have to copy", > "support-files/mysql.server to the right place for your system", > "PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER !", > "To do so, start the server, then issue the following commands:", > "'/usr/bin/mysqladmin' -u root password 'new-password'", > "'/usr/bin/mysqladmin' -u root -h controller-0 password 'new-password'", > "Alternatively you can run:", > "'/usr/bin/mysql_secure_installation'", > "which will also give you the option of removing the test", > "databases and anonymous user created by default. 
This is", > "strongly recommended for production servers.", > "See the MariaDB Knowledgebase at http://mariadb.com/kb or the", > "MySQL manual for more instructions.", > "You can start the MariaDB daemon with:", > "cd '/usr' ; /usr/bin/mysqld_safe --datadir='/var/lib/mysql'", > "You can test the MariaDB daemon with mysql-test-run.pl", > "cd '/usr/mysql-test' ; perl mysql-test-run.pl", > "Please report any problems at http://mariadb.org/jira", > "The latest information about MariaDB is available at http://mariadb.org/.", > "You can find additional information about the MySQL part at:", > "http://dev.mysql.com", > "Consider joining MariaDB's strong and vibrant community:", > "https://mariadb.org/get-involved/", > "180622 08:58:28 mysqld_safe Logging to '/var/log/mariadb/mariadb.log'.", > "180622 08:58:28 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql", > "spawn mysql_secure_installation", > "NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB", > " SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!", > "In order to log into MariaDB to secure it, we'll need the current", > "password for the root user. If you've just installed MariaDB, and", > "you haven't set the root password yet, the password will be blank,", > "so you should just press enter here.", > "Enter current password for root (enter for none): ", > "OK, successfully used password, moving on...", > "Setting the root password ensures that nobody can log into the MariaDB", > "root user without the proper authorisation.", > "Set root password? [Y/n] y", > "New password: ", > "Re-enter new password: ", > "Password updated successfully!", > "Reloading privilege tables..", > " ... Success!", > "By default, a MariaDB installation has an anonymous user, allowing anyone", > "to log into MariaDB without having to have a user account created for", > "them. This is intended only for testing, and to make the installation", > "go a bit smoother. 
You should remove them before moving into a", > "production environment.", > "Remove anonymous users? [Y/n] y", > "Normally, root should only be allowed to connect from 'localhost'. This", > "ensures that someone cannot guess at the root password from the network.", > "Disallow root login remotely? [Y/n] n", > " ... skipping.", > "By default, MariaDB comes with a database named 'test' that anyone can", > "access. This is also intended only for testing, and should be removed", > "before moving into a production environment.", > "Remove test database and access to it? [Y/n] y", > " - Dropping test database...", > " - Removing privileges on test database...", > "Reloading the privilege tables will ensure that all changes made so far", > "will take effect immediately.", > "Reload privilege tables now? [Y/n] y", > "Cleaning up...", > "All done! If you've completed all of the above steps, your MariaDB", > "installation should now be secure.", > "Thanks for using MariaDB!", > "180622 08:58:31 mysqld_safe mysqld from pid file /var/lib/mysql/mariadb.pid ended", > "180622 08:58:32 mysqld_safe Logging to '/var/log/mariadb/mariadb.log'.", > "180622 08:58:32 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql", > "mysqld is alive", > "180622 08:58:35 mysqld_safe mysqld from pid file /var/lib/mysql/mariadb.pid ended", > "stderr: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json", > "INFO:__main__:Validating config file", > "INFO:__main__:Kolla config strategy set to: COPY_ALWAYS", > "INFO:__main__:Copying service configuration files", > "INFO:__main__:Copying /dev/null to /etc/libqb/force-filesystem-sockets", > "INFO:__main__:Setting permission for /etc/libqb/force-filesystem-sockets", > "INFO:__main__:Deleting /etc/my.cnf.d/galera.cnf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/my.cnf.d/galera.cnf to /etc/my.cnf.d/galera.cnf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/sysconfig/clustercheck to 
/etc/sysconfig/clustercheck", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/root/.my.cnf to /root/.my.cnf", > "INFO:__main__:Writing out command to execute", > "2018-06-22 8:58:15 140270148200640 [Warning] option 'open_files_limit': unsigned value 18446744073709551615 adjusted to 4294967295", > "2018-06-22 8:58:15 140270148200640 [Note] /usr/libexec/mysqld (mysqld 10.1.20-MariaDB) starting as process 42 ...", > "2018-06-22 8:58:20 140193535609024 [Warning] option 'open_files_limit': unsigned value 18446744073709551615 adjusted to 4294967295", > "2018-06-22 8:58:20 140193535609024 [Note] /usr/libexec/mysqld (mysqld 10.1.20-MariaDB) starting as process 71 ...", > "2018-06-22 8:58:24 140017885132992 [Warning] option 'open_files_limit': unsigned value 18446744073709551615 adjusted to 4294967295", > "2018-06-22 8:58:24 140017885132992 [Note] /usr/libexec/mysqld (mysqld 10.1.20-MariaDB) starting as process 101 ...", > "/usr/bin/mysqld_safe: line 755: ulimit: -1: invalid option", > "ulimit: usage: ulimit [-SHacdefilmnpqrstuvx] [limit]", > "stdout: c27e3b63a5c85e5fec091c3c7bcc93330579187103d989166691e828df158f16" > ] >} >2018-06-22 04:58:37,763 p=11115 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [] >} >2018-06-22 04:58:37,790 p=11115 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [] >} >2018-06-22 04:58:37,811 p=11115 u=mistral | TASK [Check if /var/lib/docker-puppet/docker-puppet-tasks1.json exists] ******** >2018-06-22 04:58:38,189 p=11115 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 04:58:38,196 p=11115 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 04:58:38,234 p=11115 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 04:58:38,259 
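>[editor's note] The `ulimit: -1: invalid option` message above appears to come from mysqld_safe passing a limit value of -1 to the shell's `ulimit` builtin, which then parses the leading dash as an (unknown) option flag instead of a numeric limit. A minimal reproduction sketch, assuming only bash is available (the exact mysqld_safe line that triggers it is not shown in this log):

```python
import subprocess

# Reproduce the shell-level failure seen in the log: when "-1" reaches ulimit
# as its argument, bash treats it as an option flag, not a limit value, and
# prints "ulimit: -1: invalid option" plus a usage line.
res = subprocess.run(["bash", "-c", "ulimit -1"],
                     capture_output=True, text=True)
print(res.returncode)        # non-zero: the builtin rejected the argument
print(res.stderr.strip())    # contains "ulimit: -1: invalid option"
```

Note this is only a sketch of the shell behavior; whether mysqld_safe computed -1 from the clamped `open_files_limit` shown earlier in the log is an assumption, not something this log confirms.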
p=11115 u=mistral | TASK [Run docker-puppet tasks (bootstrap tasks) for step 1] ******************** >2018-06-22 04:58:38,315 p=11115 u=mistral | skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 04:58:38,320 p=11115 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 04:58:38,329 p=11115 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 04:58:38,352 p=11115 u=mistral | TASK [Debug output for task which failed: Run docker-puppet tasks (bootstrap tasks) for step 1] *** >2018-06-22 04:58:38,379 p=11115 u=mistral | skipping: [controller-0] => {"skip_reason": "Conditional result was False"} >2018-06-22 04:58:38,404 p=11115 u=mistral | skipping: [compute-0] => {"skip_reason": "Conditional result was False"} >2018-06-22 04:58:38,416 p=11115 u=mistral | skipping: [ceph-0] => {"skip_reason": "Conditional result was False"} >2018-06-22 04:58:38,421 p=11115 u=mistral | PLAY [External deployment step 2] ********************************************** >2018-06-22 04:58:38,442 p=11115 u=mistral | TASK [set blacklisted_hostnames] *********************************************** >2018-06-22 04:58:38,459 p=11115 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:58:38,476 p=11115 u=mistral | TASK [create ceph-ansible temp dirs] ******************************************* >2018-06-22 04:58:38,500 p=11115 u=mistral | skipping: [undercloud] => (item=/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/group_vars) => {"changed": false, "item": "/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/group_vars", "skip_reason": 
"Conditional result was False"} >2018-06-22 04:58:38,503 p=11115 u=mistral | skipping: [undercloud] => (item=/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/host_vars) => {"changed": false, "item": "/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/host_vars", "skip_reason": "Conditional result was False"} >2018-06-22 04:58:38,508 p=11115 u=mistral | skipping: [undercloud] => (item=/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir) => {"changed": false, "item": "/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir", "skip_reason": "Conditional result was False"} >2018-06-22 04:58:38,524 p=11115 u=mistral | TASK [generate inventory] ****************************************************** >2018-06-22 04:58:38,541 p=11115 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:58:38,560 p=11115 u=mistral | TASK [set ceph-ansible group vars all] ***************************************** >2018-06-22 04:58:38,581 p=11115 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:58:38,598 p=11115 u=mistral | TASK [generate ceph-ansible group vars all] ************************************ >2018-06-22 04:58:38,614 p=11115 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:58:38,632 p=11115 u=mistral | TASK [set ceph-ansible extra vars] ********************************************* >2018-06-22 04:58:38,649 p=11115 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:58:38,664 p=11115 u=mistral | TASK [generate ceph-ansible extra vars] **************************************** >2018-06-22 04:58:38,688 p=11115 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:58:38,712 
p=11115 u=mistral | TASK [generate collect nodes uuid playbook] ************************************ >2018-06-22 04:58:38,731 p=11115 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 04:58:38,748 p=11115 u=mistral | TASK [set ceph-ansible verbosity] ********************************************** >2018-06-22 04:58:38,775 p=11115 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_playbook_verbosity": 2}, "changed": false} >2018-06-22 04:58:38,792 p=11115 u=mistral | TASK [set ceph-ansible command] ************************************************ >2018-06-22 04:58:38,829 p=11115 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_command": "ANSIBLE_ACTION_PLUGINS=/usr/share/ceph-ansible/plugins/actions/ ANSIBLE_ROLES_PATH=/usr/share/ceph-ansible/roles/ ANSIBLE_LOG_PATH=\"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/ceph_ansible_command.log\" ANSIBLE_LIBRARY=/usr/share/ceph-ansible/library/ ANSIBLE_RETRY_FILES_ENABLED=False ANSIBLE_SSH_RETRIES=3 ANSIBLE_HOST_KEY_CHECKING=False DEFAULT_FORKS=25 ANSIBLE_CONFIG=/usr/share/ceph-ansible/ansible.cfg ansible-playbook --private-key /var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ssh_private_key -vv --skip-tags package-install,with_pkg -i /var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/inventory.yml --extra-vars @/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/extra_vars.yml"}, "changed": false} >2018-06-22 04:58:38,846 p=11115 u=mistral | TASK [run ceph-ansible] ******************************************************** >2018-06-22 04:59:47,680 p=11115 u=mistral | failed: [undercloud] (item=/usr/share/ceph-ansible/site-docker.yml.sample) => {"changed": true, "cmd": "ANSIBLE_ACTION_PLUGINS=/usr/share/ceph-ansible/plugins/actions/ ANSIBLE_ROLES_PATH=/usr/share/ceph-ansible/roles/ 
ANSIBLE_LOG_PATH=\"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/ceph_ansible_command.log\" ANSIBLE_LIBRARY=/usr/share/ceph-ansible/library/ ANSIBLE_RETRY_FILES_ENABLED=False ANSIBLE_SSH_RETRIES=3 ANSIBLE_HOST_KEY_CHECKING=False DEFAULT_FORKS=25 ANSIBLE_CONFIG=/usr/share/ceph-ansible/ansible.cfg ansible-playbook --private-key /var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ssh_private_key -vv --skip-tags package-install,with_pkg -i /var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/inventory.yml --extra-vars @/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/extra_vars.yml /usr/share/ceph-ansible/site-docker.yml.sample", "delta": "0:01:08.580228", "end": "2018-06-22 04:59:47.600578", "item": "/usr/share/ceph-ansible/site-docker.yml.sample", "msg": "non-zero return code", "rc": 2, "start": "2018-06-22 04:58:39.020350", "stderr": "[DEPRECATION WARNING]: The use of 'static' has been deprecated. Use \n'import_tasks' for static inclusion, or 'include_tasks' for dynamic inclusion. \nThis feature will be removed in a future release. Deprecation warnings can be \ndisabled by setting deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: docker is kept for backwards compatibility but usage is \ndiscouraged. The module documentation details page may explain more about this \nrationale.. This feature will be removed in a future release. 
Deprecation \nwarnings can be disabled by setting deprecation_warnings=False in ansible.cfg.\n [WARNING]: Could not match supplied host pattern, ignoring: agents\n [WARNING]: Could not match supplied host pattern, ignoring: mdss\n [WARNING]: Could not match supplied host pattern, ignoring: rgws\n [WARNING]: Could not match supplied host pattern, ignoring: nfss\n [WARNING]: Could not match supplied host pattern, ignoring: restapis\n [WARNING]: Could not match supplied host pattern, ignoring: rbdmirrors\n [WARNING]: Could not match supplied host pattern, ignoring: iscsigws\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. 
Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n [WARNING]: when statements should not include jinja2 templating delimiters\nsuch as {{ }} or {% %}. Found: {{ inventory_hostname ==\ngroups[mon_group_name][0] }}\n [WARNING]: when statements should not include jinja2 templating delimiters\nsuch as {{ }} or {% %}. Found: {{ inventory_hostname ==\ngroups[mon_group_name][0] }}\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.", "stderr_lines": ["[DEPRECATION WARNING]: The use of 'static' has been deprecated. Use ", "'import_tasks' for static inclusion, or 'include_tasks' for dynamic inclusion. ", "This feature will be removed in a future release. Deprecation warnings can be ", "disabled by setting deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: docker is kept for backwards compatibility but usage is ", "discouraged. The module documentation details page may explain more about this ", "rationale.. This feature will be removed in a future release. 
Deprecation ", "warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.", " [WARNING]: Could not match supplied host pattern, ignoring: agents", " [WARNING]: Could not match supplied host pattern, ignoring: mdss", " [WARNING]: Could not match supplied host pattern, ignoring: rgws", " [WARNING]: Could not match supplied host pattern, ignoring: nfss", " [WARNING]: Could not match supplied host pattern, ignoring: restapis", " [WARNING]: Could not match supplied host pattern, ignoring: rbdmirrors", " [WARNING]: Could not match supplied host pattern, ignoring: iscsigws", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. 
Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", " [WARNING]: when statements should not include jinja2 templating delimiters", "such as {{ }} or {% %}. Found: {{ inventory_hostname ==", "groups[mon_group_name][0] }}", " [WARNING]: when statements should not include jinja2 templating delimiters", "such as {{ }} or {% %}. Found: {{ inventory_hostname ==", "groups[mon_group_name][0] }}", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. 
Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg."], "stdout": "ansible-playbook 2.5.4\n config file = /usr/share/ceph-ansible/ansible.cfg\n configured module search path = [u'/usr/share/ceph-ansible/library']\n ansible python module location = /usr/lib/python2.7/site-packages/ansible\n executable location = /usr/bin/ansible-playbook\n python version = 2.7.5 (default, Feb 20 2018, 09:19:12) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]\nUsing /usr/share/ceph-ansible/ansible.cfg as config file\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/secure_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/configure_ceph_command_aliases.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/fetch_configs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/openstack_config.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/create_mds_filesystems.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/calamari.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/common.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/main.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/selinux.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/start_docker_mgr.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_gpt.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/common.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/non_containerized.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/containerized.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-rgw/tasks/common.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/common.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/pre_requisite_non_container.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/pre_requisite_container.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/create_rgw_nfs_user.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/ganesha_selinux_fix.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/start_nfs.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/common.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/pre_requisite.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/start_rbd_mirror.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/configure_mirroring.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/docker/main.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/docker/selinux.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/docker/start_docker_rbd_mirror.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/pre_requisite.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/start_restapi.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/docker/main.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/docker/copy_configs.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/docker/start_docker_restapi.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-client/tasks/pre_requisite.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml

PLAYBOOK: site-docker.yml.sample ***********************************************
12 plays in /usr/share/ceph-ansible/site-docker.yml.sample

PLAY [mons,agents,osds,mdss,rgws,nfss,restapis,rbdmirrors,clients,iscsigws,mgrs] ***

TASK [gather facts] ************************************************************
task path: /usr/share/ceph-ansible/site-docker.yml.sample:24
Friday 22 June 2018 04:58:41 -0400 (0:00:00.140) 0:00:00.140 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
ok: [compute-0]

TASK [gather and delegate facts] ***********************************************
task path: /usr/share/ceph-ansible/site-docker.yml.sample:29
Friday 22 June 2018 04:58:44 -0400 (0:00:03.286) 0:00:03.427 *********** 
ok: [controller-0 -> 192.168.24.13] => (item=ceph-0)
ok: [ceph-0 -> 192.168.24.13] => (item=ceph-0)
ok: [compute-0 -> 192.168.24.13] => (item=ceph-0)
ok: [controller-0 -> 192.168.24.12] => (item=controller-0)
ok: [compute-0 -> 192.168.24.12] => (item=controller-0)
ok: [ceph-0 -> 192.168.24.12] => (item=controller-0)

TASK [check if it is atomic host] **********************************************
task path: /usr/share/ceph-ansible/site-docker.yml.sample:37
Friday 22 June 2018 04:58:51 -0400 (0:00:06.997) 0:00:10.425 *********** 
ok: [compute-0] => {"changed": false, "stat": {"exists": false}}
ok: [controller-0] => {"changed": false, "stat": {"exists": false}}
ok: [ceph-0] => {"changed": false, "stat": {"exists": false}}

TASK [set_fact is_atomic] ******************************************************
task path: /usr/share/ceph-ansible/site-docker.yml.sample:44
Friday 22 June 2018 04:58:52 -0400 (0:00:00.685) 0:00:11.110 *********** 
ok: [controller-0] => {"ansible_facts": {"is_atomic": false}, "changed": false}
ok: [ceph-0] => {"ansible_facts": {"is_atomic": false}, "changed": false}
ok: [compute-0] => {"ansible_facts": {"is_atomic": false}, "changed": false}
META: ran handlers
META: ran handlers

TASK [pull rhceph image] *******************************************************
task path: /usr/share/ceph-ansible/site-docker.yml.sample:65
Friday 22 June 2018 04:58:52 -0400 (0:00:00.152) 0:00:11.263 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
META: ran handlers

PLAY [mons] ********************************************************************
META: ran handlers

TASK [set ceph monitor install 'In Progress'] **********************************
task path: /usr/share/ceph-ansible/site-docker.yml.sample:75
Friday 22 June 2018 04:58:52 -0400 (0:00:00.112) 0:00:11.375 *********** 
ok: [controller-0] => {"ansible_stats": {"aggregate": true, "data": {"installer_phase_ceph_mon": {"start": "20180622045852Z", "status": "In Progress"}}, "per_host": false}, "changed": false}
META: ran handlers
META: ran handlers

PLAY [mons] ********************************************************************
META: ran handlers

TASK [ceph-defaults : check for a mon container] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:2
Friday 22 June 2018 04:58:52 -0400 (0:00:00.161) 0:00:11.537 *********** 
ok: [controller-0] => {"changed": false, "cmd": ["docker", "ps", "-q", "--filter=name=ceph-mon-controller-0"], "delta": "0:00:00.029870", "end": "2018-06-22 08:58:53.560953", "failed_when_result": false, "rc": 0, "start": "2018-06-22 08:58:53.531083", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}

TASK [ceph-defaults : check for an osd container] ******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:11
Friday 22 June 2018 04:58:53 -0400 (0:00:00.645) 0:00:12.183 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a mds container] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:20
Friday 22 June 2018 04:58:53 -0400 (0:00:00.043) 0:00:12.226 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a rgw container] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:29
Friday 22 June 2018 04:58:53 -0400 (0:00:00.045) 0:00:12.271 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a mgr container] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:38
Friday 22 June 2018 04:58:53 -0400 (0:00:00.043) 0:00:12.315 *********** 
ok: [controller-0] => {"changed": false, "cmd": ["docker", "ps", "-q", "--filter=name=ceph-mgr-controller-0"], "delta": "0:00:00.028308", "end": "2018-06-22 08:58:54.239037", "failed_when_result": false, "rc": 0, "start": "2018-06-22 08:58:54.210729", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}

TASK [ceph-defaults : check for a rbd mirror container] ************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:47
Friday 22 June 2018 04:58:54 -0400 (0:00:00.544) 0:00:12.859 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a nfs container] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:56
Friday 22 June 2018 04:58:54 -0400 (0:00:00.053) 0:00:12.913 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a ceph mon socket] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:2
Friday 22 June 2018 04:58:54 -0400 (0:00:00.056) 0:00:12.970 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if the ceph mon socket is in-use] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:11
Friday 22 June 2018 04:58:54 -0400 (0:00:00.047) 0:00:13.018 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph mon socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:21
Friday 22 June 2018 04:58:54 -0400 (0:00:00.048) 0:00:13.066 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a ceph osd socket] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:30
Friday 22 June 2018 04:58:54 -0400 (0:00:00.045) 0:00:13.111 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if the ceph osd socket is in-use] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:40
Friday 22 June 2018 04:58:54 -0400 (0:00:00.044) 0:00:13.156 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph osd socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:50
Friday 22 June 2018 04:58:54 -0400 (0:00:00.044) 0:00:13.201 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a ceph mds socket] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:59
Friday 22 June 2018 04:58:54 -0400 (0:00:00.042) 0:00:13.243 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if the ceph mds socket is in-use] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:69
Friday 22 June 2018 04:58:54 -0400 (0:00:00.043) 0:00:13.287 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph mds socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:79
Friday 22 June 2018 04:58:54 -0400 (0:00:00.045) 0:00:13.332 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a ceph rgw socket] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:88
Friday 22 June 2018 04:58:54 -0400 (0:00:00.043) 0:00:13.375 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if the ceph rgw socket is in-use] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:98
Friday 22 June 2018 04:58:54 -0400 (0:00:00.042) 0:00:13.417 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph rgw socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:108
Friday 22 June 2018 04:58:54 -0400 (0:00:00.042) 0:00:13.459 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a ceph mgr socket] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:117
Friday 22 June 2018 04:58:54 -0400 (0:00:00.041) 0:00:13.501 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if the ceph mgr socket is in-use] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:127
Friday 22 June 2018 04:58:54 -0400 (0:00:00.045) 0:00:13.546 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph mgr socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:137
Friday 22 June 2018 04:58:54 -0400 (0:00:00.044) 0:00:13.591 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a ceph rbd mirror socket] **********************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:146
Friday 22 June 2018 04:58:55 -0400 (0:00:00.042) 0:00:13.633 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if the ceph rbd mirror socket is in-use] ***********
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:156
Friday 22 June 2018 04:58:55 -0400 (0:00:00.045) 0:00:13.679 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph rbd mirror socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:166
Friday 22 June 2018 04:58:55 -0400 (0:00:00.044) 0:00:13.723 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a ceph nfs ganesha socket] *********************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:175
Friday 22 June 2018 04:58:55 -0400 (0:00:00.047) 0:00:13.770 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if the ceph nfs ganesha socket is in-use] **********
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:184
Friday 22 June 2018 04:58:55 -0400 (0:00:00.046) 0:00:13.817 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph nfs ganesha socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:194
Friday 22 June 2018 04:58:55 -0400 (0:00:00.045) 0:00:13.863 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if it is atomic host] ******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:2
Friday 22 June 2018 04:58:55 -0400 (0:00:00.045) 0:00:13.909 *********** 
ok: [controller-0] => {"changed": false, "stat": {"exists": false}}

TASK [ceph-defaults : set_fact is_atomic] **************************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:7
Friday 22 June 2018 04:58:55 -0400 (0:00:00.600) 0:00:14.509 *********** 
ok: [controller-0] => {"ansible_facts": {"is_atomic": false}, "changed": false}

TASK [ceph-defaults : set_fact monitor_name ansible_hostname] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:11
Friday 22 June 2018 04:58:56 -0400 (0:00:00.249) 0:00:14.759 *********** 
ok: [controller-0] => {"ansible_facts": {"monitor_name": "controller-0"}, "changed": false}

TASK [ceph-defaults : set_fact monitor_name ansible_fqdn] **********************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:17
Friday 22 June 2018 04:58:56 -0400 (0:00:00.072) 0:00:14.831 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact docker_exec_cmd] ********************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:23
Friday 22 June 2018 04:58:56 -0400 (0:00:00.069) 0:00:14.901 *********** 
ok: [controller-0 -> 192.168.24.12] => {"ansible_facts": {"docker_exec_cmd": "docker exec ceph-mon-controller-0"}, "changed": false}

TASK [ceph-defaults : is ceph running already?] ********************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:34
Friday 22 June 2018 04:58:56 -0400 (0:00:00.133) 0:00:15.034 *********** 
ok: [controller-0 -> 192.168.24.12] => {"changed": false, "cmd": ["timeout", "5", "docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "fsid"], "delta": "0:00:00.031808", "end": "2018-06-22 08:58:56.972239", "failed_when_result": false, "msg": "non-zero return code", "rc": 1, "start": "2018-06-22 08:58:56.940431", "stderr": "Error response from daemon: No such container: ceph-mon-controller-0", "stderr_lines": ["Error response from daemon: No such container: ceph-mon-controller-0"], "stdout": "", "stdout_lines": []}

TASK [ceph-defaults : check if /var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir directory exists] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:47
Friday 22 June 2018 04:58:56 -0400 (0:00:00.566) 0:00:15.600 *********** 
ok: [controller-0 -> localhost] => {"changed": false, "stat": {"exists": false}}

TASK [ceph-defaults : set_fact ceph_current_fsid rc 1] *************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:57
Friday 22 June 2018 04:58:57 -0400 (0:00:00.194) 0:00:15.795 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : create a local fetch directory if it does not exist] *****
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:64
Friday 22 June 2018 04:58:57 -0400 (0:00:00.052) 0:00:15.847 *********** 
ok: [controller-0 -> localhost] => {"changed": false, "gid": 985, "group": "mistral", "mode": "0755", "owner": "mistral", "path": "/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir", "secontext": "system_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 988}

TASK [ceph-defaults : set_fact fsid ceph_current_fsid.stdout] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:74
Friday 22 June 2018 04:58:57 -0400 (0:00:00.294) 0:00:16.142 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_release ceph_stable_release] ***************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:81
Friday 22 June 2018 04:58:57 -0400 (0:00:00.044) 0:00:16.186 *********** 
ok: [controller-0] => {"ansible_facts": {"ceph_release": "dummy"}, "changed": false}

TASK [ceph-defaults : generate cluster fsid] ***********************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:85
Friday 22 June 2018 04:58:57 -0400 (0:00:00.068) 0:00:16.255 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : reuse cluster fsid when cluster is already running] ******
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:96
Friday 22 June 2018 04:58:57 -0400 (0:00:00.048) 0:00:16.304 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : read cluster fsid if it already exists] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:105
Friday 22 June 2018 04:58:57 -0400 (0:00:00.048) 0:00:16.352 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact fsid] *******************************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:117
Friday 22 June 2018 04:58:57 -0400 (0:00:00.045) 0:00:16.398 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact mds_name ansible_hostname] **********************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:123
Friday 22 June 2018 04:58:57 -0400 (0:00:00.039) 0:00:16.438 *********** 
ok: [controller-0] => {"ansible_facts": {"mds_name": "controller-0"}, "changed": false}

TASK [ceph-defaults : set_fact mds_name ansible_fqdn] **************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:129
Friday 22 June 2018 04:58:57 -0400 (0:00:00.071) 0:00:16.509 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact rbd_client_directory_owner ceph] ****************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:135
Friday 22 June 2018 04:58:57 -0400 (0:00:00.037) 0:00:16.547 *********** 
ok: [controller-0] => {"ansible_facts": {"rbd_client_directory_owner": "ceph"}, "changed": false}

TASK [ceph-defaults : set_fact rbd_client_directory_group rbd_client_directory_group] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:142
Friday 22 June 2018 04:58:57 -0400 (0:00:00.069) 0:00:16.617 *********** 
ok: [controller-0] => {"ansible_facts": {"rbd_client_directory_group": "ceph"}, "changed": false}

TASK [ceph-defaults : set_fact rbd_client_directory_mode 0770] *****************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:149
Friday 22 June 2018 04:58:58 -0400 (0:00:00.072) 0:00:16.689 *********** 
ok: [controller-0] => {"ansible_facts": {"rbd_client_directory_mode": "0770"}, "changed": false}

TASK [ceph-defaults : resolve device link(s)] **********************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:156
Friday 22 June 2018 04:58:58 -0400 (0:00:00.071) 0:00:16.760 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact build devices from resolved symlinks] ***********
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:166
Friday 22 June 2018 04:58:58 -0400 (0:00:00.045) 0:00:16.806 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact build final devices list] ***********************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:175
Friday 22 June 2018 04:58:58 -0400 (0:00:00.048) 0:00:16.855 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_uid for Debian based system] ***************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:183
Friday 22 June 2018 04:58:58 -0400 (0:00:00.045) 0:00:16.900 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_uid for Red Hat based system] **************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:190
Friday 22 June 2018 04:58:58 -0400 (0:00:00.048) 0:00:16.949 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_uid for Red Hat] ***************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:197
Friday 22 June 2018 04:58:58 -0400 (0:00:00.048) 0:00:16.997 *********** 
ok: [controller-0] => {"ansible_facts": {"ceph_uid": 167}, "changed": false}

TASK [ceph-defaults : check if selinux is enabled] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:204
Friday 22 June 2018 04:58:58 -0400 (0:00:00.074) 0:00:17.072 *********** 
ok: [controller-0] => {"changed": false, "cmd": ["getenforce"], "delta": "0:00:00.003692", "end": "2018-06-22 08:58:58.970770", "rc": 0, "start": "2018-06-22 08:58:58.967078", "stderr": "", "stderr_lines": [], "stdout": "Enforcing", "stdout_lines": ["Enforcing"]}

TASK [ceph-docker-common : fail if systemd is not present] *********************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml:2
Friday 22 June 2018 04:58:58 -0400 (0:00:00.521) 0:00:17.593 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : make sure monitor_interface, monitor_address or monitor_address_block is defined] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:2
Friday 22 June 2018 04:58:59 -0400 (0:00:00.043) 0:00:17.637 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : make sure radosgw_interface, radosgw_address or radosgw_address_block is defined] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:11
Friday 22 June 2018 04:58:59 -0400 (0:00:00.057) 0:00:17.694 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : remove ceph udev rules] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml:2
Friday 22 June 2018 04:58:59 -0400 (0:00:00.049) 0:00:17.743 *********** 
ok: [controller-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) => {"changed": false, "item": "/usr/lib/udev/rules.d/95-ceph-osd.rules", "path": "/usr/lib/udev/rules.d/95-ceph-osd.rules", "state": "absent"}
ok: [controller-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) => {"changed": false, "item": "/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules", "path": "/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules", "state": "absent"}

TASK [ceph-docker-common : set_fact monitor_name ansible_hostname] *************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:14
Friday 22 June 2018 04:59:00 -0400 (0:00:00.967) 0:00:18.711 *********** 
ok: [controller-0] => {"ansible_facts": {"monitor_name": "controller-0"}, "changed": false}

TASK [ceph-docker-common : set_fact monitor_name ansible_fqdn] *****************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:20
Friday 22 June 2018 04:59:00 -0400 (0:00:00.073) 0:00:18.784 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : get docker version] *********************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:26
Friday 22 June 2018 04:59:00 -0400 (0:00:00.039) 0:00:18.824 *********** 
ok: [controller-0] => {"changed": false, "cmd": ["docker", "--version"], "delta": "0:00:00.028573", "end": "2018-06-22 08:59:00.739996", "rc": 0, "start": "2018-06-22 08:59:00.711423", "stderr": "", "stderr_lines": [], "stdout": "Docker version 1.13.1, build 94f4240/1.13.1", "stdout_lines": ["Docker version 1.13.1, build 94f4240/1.13.1"]}

TASK [ceph-docker-common : set_fact ceph_docker_version ceph_docker_version.stdout.split] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:32
Friday 22 June 2018 04:59:00 -0400 (0:00:00.542) 0:00:19.366 *********** 
ok: [controller-0] => {"ansible_facts": {"ceph_docker_version": "1.13.1,"}, "changed": false}

TASK [ceph-docker-common : check if a cluster is already running] **************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:42
Friday 22 June 2018 04:59:00 -0400 (0:00:00.069) 0:00:19.436 *********** 
ok: [controller-0] => {"changed": false, "cmd": ["docker", "ps", "-q", "--filter=name=ceph-mon-controller-0"], "delta": "0:00:00.029386", "end": "2018-06-22 08:59:01.362399", "failed_when_result": false, "rc": 0, "start": "2018-06-22 08:59:01.333013", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}

TASK [ceph-docker-common : set_fact ceph_config_keys] **************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:2
Friday 22 June 2018 04:59:01 -0400 (0:00:00.551) 0:00:19.987 *********** 
ok: [controller-0] => {"ansible_facts": {"ceph_config_keys": ["/etc/ceph/ceph.client.admin.keyring", "/etc/ceph/monmap-ceph", "/etc/ceph/ceph.mon.keyring", "/var/lib/ceph/bootstrap-osd/ceph.keyring", "/var/lib/ceph/bootstrap-rgw/ceph.keyring", "/var/lib/ceph/bootstrap-mds/ceph.keyring", "/var/lib/ceph/bootstrap-rbd/ceph.keyring"]}, "changed": false}

TASK [ceph-docker-common : set_fact tmp_ceph_mgr_keys add mgr keys to config and keys paths] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:13
Friday 22 June 2018 04:59:01 -0400 (0:00:00.081) 0:00:20.068 *********** 
ok: [controller-0] => (item=controller-0) => {"ansible_facts": {"tmp_ceph_mgr_keys": "/etc/ceph/ceph.mgr.controller-0.keyring"}, "changed": false, "item": "controller-0"}

TASK [ceph-docker-common : set_fact ceph_mgr_keys convert mgr keys to an array] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:20
Friday 22 June 2018 04:59:01 -0400 (0:00:00.120) 0:00:20.189 *********** 
ok: [controller-0] => {"ansible_facts": {"ceph_mgr_keys": ["/etc/ceph/ceph.mgr.controller-0.keyring"]}, "changed": false}

TASK [ceph-docker-common : set_fact ceph_config_keys merge mgr keys to config and keys paths] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:25
Friday 22 June 2018 04:59:01 -0400 (0:00:00.088) 0:00:20.272 *********** 
ok: [controller-0] => {"ansible_facts": {"ceph_config_keys": ["/etc/ceph/ceph.client.admin.keyring", "/etc/ceph/monmap-ceph", "/etc/ceph/ceph.mon.keyring", "/var/lib/ceph/bootstrap-osd/ceph.keyring", "/var/lib/ceph/bootstrap-rgw/ceph.keyring", "/var/lib/ceph/bootstrap-mds/ceph.keyring", "/var/lib/ceph/bootstrap-rbd/ceph.keyring", "/etc/ceph/ceph.mgr.controller-0.keyring"]}, "changed": false}

TASK [ceph-docker-common : stat for ceph config and keys] **********************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:30
Friday 22 June 2018 04:59:01 -0400 (0:00:00.088) 0:00:20.361 *********** 
ok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.client.admin.keyring) => {"changed": false, "failed_when_result": false, "item": "/etc/ceph/ceph.client.admin.keyring", "stat": {"exists": false}}
ok: [controller-0 -> localhost] => 
(item=/etc/ceph/monmap-ceph) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/monmap-ceph\", \"stat\": {\"exists\": false}}\nok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.mon.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"exists\": false}}\nok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}\nok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"exists\": false}}\nok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}\nok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}\nok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.mgr.controller-0.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"stat\": {\"exists\": false}}\n\nTASK [ceph-docker-common : fail if we find existing cluster files] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml:5\nFriday 22 June 2018 04:59:02 -0400 (0:00:01.179) 0:00:21.541 *********** \nskipping: [controller-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, 
'_ansible_item_result': True, 'item': u'/etc/ceph/ceph.client.admin.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.client.admin.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.client.admin.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.client.admin.keyring\"}}, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/etc/ceph/monmap-ceph', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/monmap-ceph', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/monmap-ceph', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 
'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/etc/ceph/monmap-ceph\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/monmap-ceph\"}}, \"item\": \"/etc/ceph/monmap-ceph\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mon.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mon.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mon.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, 
\"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mon.keyring\"}}, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-osd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-osd/ceph.keyring\"}}, \"item\": 
\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rgw/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': 
{'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-mds/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-mds/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': 
u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rbd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/etc/ceph/ceph.mgr.controller-0.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mgr.controller-0.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": 
[\"/etc/ceph/ceph.mgr.controller-0.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mgr.controller-0.keyring\"}}, \"item\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on atomic] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml:2\nFriday 22 June 2018 04:59:03 -0400 (0:00:00.248) 0:00:21.790 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml:6\nFriday 22 June 2018 04:59:03 -0400 (0:00:00.043) 0:00:21.833 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on redhat or suse] ***********\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:2\nFriday 22 June 2018 04:59:03 -0400 (0:00:00.039) 0:00:21.873 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : install ntp on redhat or suse] **********************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:13\nFriday 22 June 2018 04:59:03 -0400 (0:00:00.045) 0:00:21.919 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml:7\nFriday 22 June 2018 04:59:03 -0400 (0:00:00.044) 0:00:21.963 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on debian] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:2\nFriday 22 June 2018 04:59:03 -0400 (0:00:00.047) 0:00:22.010 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : install ntp on debian] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:11\nFriday 22 June 2018 04:59:03 -0400 (0:00:00.050) 0:00:22.061 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml:7\nFriday 22 June 2018 04:59:03 -0400 (0:00:00.042) 0:00:22.104 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mon container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:3\nFriday 22 June 2018 04:59:03 -0400 (0:00:00.044) 0:00:22.148 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional 
result was False\"}\n\nTASK [ceph-docker-common : inspect ceph osd container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:12\nFriday 22 June 2018 04:59:03 -0400 (0:00:00.047) 0:00:22.195 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mds container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:21\nFriday 22 June 2018 04:59:03 -0400 (0:00:00.041) 0:00:22.237 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph rgw container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:30\nFriday 22 June 2018 04:59:03 -0400 (0:00:00.043) 0:00:22.280 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mgr container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:39\nFriday 22 June 2018 04:59:03 -0400 (0:00:00.142) 0:00:22.422 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph rbd mirror container] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:48\nFriday 22 June 2018 04:59:03 -0400 (0:00:00.052) 0:00:22.475 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph nfs container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:57\nFriday 22 June 2018 04:59:03 -0400 (0:00:00.046) 0:00:22.521 *********** 
\nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph mon container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:67\nFriday 22 June 2018 04:59:03 -0400 (0:00:00.042) 0:00:22.564 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph osd container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:76\nFriday 22 June 2018 04:59:03 -0400 (0:00:00.049) 0:00:22.614 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph rgw container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:85\nFriday 22 June 2018 04:59:04 -0400 (0:00:00.048) 0:00:22.663 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph mds container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:94\nFriday 22 June 2018 04:59:04 -0400 (0:00:00.043) 0:00:22.706 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph mgr container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:103\nFriday 22 June 2018 04:59:04 -0400 (0:00:00.041) 0:00:22.748 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph rbd mirror container image before pulling] ***\ntask path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:112\nFriday 22 June 2018 04:59:04 -0400 (0:00:00.046) 0:00:22.794 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph nfs container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:121\nFriday 22 June 2018 04:59:04 -0400 (0:00:00.042) 0:00:22.837 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mon_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:130\nFriday 22 June 2018 04:59:04 -0400 (0:00:00.044) 0:00:22.881 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_osd_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:137\nFriday 22 June 2018 04:59:04 -0400 (0:00:00.045) 0:00:22.927 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mds_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:144\nFriday 22 June 2018 04:59:04 -0400 (0:00:00.043) 0:00:22.970 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rgw_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:151\nFriday 22 June 2018 04:59:04 -0400 (0:00:00.046) 0:00:23.017 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK 
[ceph-docker-common : set_fact ceph_mgr_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:158\nFriday 22 June 2018 04:59:04 -0400 (0:00:00.043) 0:00:23.060 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:165\nFriday 22 June 2018 04:59:04 -0400 (0:00:00.048) 0:00:23.108 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_nfs_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:172\nFriday 22 June 2018 04:59:04 -0400 (0:00:00.050) 0:00:23.159 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-6 image] *********\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179\nFriday 22 June 2018 04:59:04 -0400 (0:00:00.045) 0:00:23.204 *********** \nok: [controller-0] => {\"attempts\": 1, \"changed\": false, \"cmd\": [\"timeout\", \"300s\", \"docker\", \"pull\", \"192.168.24.1:8787/rhceph:3-6\"], \"delta\": \"0:00:16.088112\", \"end\": \"2018-06-22 08:59:21.173416\", \"rc\": 0, \"start\": \"2018-06-22 08:59:05.085304\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Trying to pull repository 192.168.24.1:8787/rhceph ... 
\\n3-6: Pulling from 192.168.24.1:8787/rhceph\\n9a32f102e677: Pulling fs layer\\nb8aa42cec17a: Pulling fs layer\\nf00cbf28d025: Pulling fs layer\\nb8aa42cec17a: Verifying Checksum\\nb8aa42cec17a: Download complete\\n9a32f102e677: Verifying Checksum\\n9a32f102e677: Download complete\\nf00cbf28d025: Verifying Checksum\\nf00cbf28d025: Download complete\\n9a32f102e677: Pull complete\\nb8aa42cec17a: Pull complete\\nf00cbf28d025: Pull complete\\nDigest: sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\nStatus: Downloaded newer image for 192.168.24.1:8787/rhceph:3-6\", \"stdout_lines\": [\"Trying to pull repository 192.168.24.1:8787/rhceph ... \", \"3-6: Pulling from 192.168.24.1:8787/rhceph\", \"9a32f102e677: Pulling fs layer\", \"b8aa42cec17a: Pulling fs layer\", \"f00cbf28d025: Pulling fs layer\", \"b8aa42cec17a: Verifying Checksum\", \"b8aa42cec17a: Download complete\", \"9a32f102e677: Verifying Checksum\", \"9a32f102e677: Download complete\", \"f00cbf28d025: Verifying Checksum\", \"f00cbf28d025: Download complete\", \"9a32f102e677: Pull complete\", \"b8aa42cec17a: Pull complete\", \"f00cbf28d025: Pull complete\", \"Digest: sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\", \"Status: Downloaded newer image for 192.168.24.1:8787/rhceph:3-6\"]}\n\nTASK [ceph-docker-common : inspecting 192.168.24.1:8787/rhceph:3-6 image after pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:189\nFriday 22 June 2018 04:59:21 -0400 (0:00:16.599) 0:00:39.804 *********** \nchanged: [controller-0] => {\"changed\": true, \"cmd\": [\"docker\", \"inspect\", \"192.168.24.1:8787/rhceph:3-6\"], \"delta\": \"0:00:00.031880\", \"end\": \"2018-06-22 08:59:21.735819\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-22 08:59:21.703939\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\\n 
\\\"RepoTags\\\": [\\n \\\"192.168.24.1:8787/rhceph:3-6\\\"\\n ],\\n \\\"RepoDigests\\\": [\\n \\\"192.168.24.1:8787/rhceph@sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\\"\\n ],\\n \\\"Parent\\\": \\\"\\\",\\n \\\"Comment\\\": \\\"\\\",\\n \\\"Created\\\": \\\"2018-04-18T13:13:30.317845Z\\\",\\n \\\"Container\\\": \\\"\\\",\\n \\\"ContainerConfig\\\": {\\n \\\"Hostname\\\": \\\"9817222a9fd1\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": [\\n \\\"/bin/sh\\\",\\n \\\"-c\\\",\\n \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z2.repo'\\\"\\n ],\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"sha256:e8b064b6d59e5ae67703983d9bcadb3e48e4bad1443bd2d8ca86096ce6969ba9\\\",\\n \\\"Volumes\\\": {\\n \\\"/etc/ceph\\\": {},\\n \\\"/etc/ganesha\\\": {},\\n \\\"/var/lib/ceph\\\": {}\\n },\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"master\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"master\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": 
\\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\\n \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"6\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\\n \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"DockerVersion\\\": \\\"1.12.6\\\",\\n \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"9817222a9fd1\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": 
{},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"e0292b8001103cbd70a728aa73b8c602430c923944c4fcbaf5e62eda9e16530f\\\",\\n \\\"Volumes\\\": {\\n \\\"/etc/ceph\\\": {},\\n \\\"/etc/ganesha\\\": {},\\n \\\"/var/lib/ceph\\\": {}\\n },\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"master\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"master\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\\n \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"name\\\": 
\\\"rhceph\\\",\\n \\\"release\\\": \\\"6\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\\n \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"Architecture\\\": \\\"amd64\\\",\\n \\\"Os\\\": \\\"linux\\\",\\n \\\"Size\\\": 732827275,\\n \\\"VirtualSize\\\": 732827275,\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/4f400cee0cf5241b5500dfa5fc0ba4b0bf6b1d4756555cc7e07b19c2af9fb12b/diff:/var/lib/docker/overlay2/e7ceb7c5f142ee0ead3760ed5e37988896c004d9442b29434db4f7afb4c18364/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/4f331ea494c349b1dc25c0d4fb87b85b626b1d371ac0031cce8ddb5c48757818/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/4f331ea494c349b1dc25c0d4fb87b85b626b1d371ac0031cce8ddb5c48757818/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/4f331ea494c349b1dc25c0d4fb87b85b626b1d371ac0031cce8ddb5c48757818/work\\\"\\n }\\n },\\n \\\"RootFS\\\": {\\n \\\"Type\\\": \\\"layers\\\",\\n \\\"Layers\\\": [\\n \\\"sha256:e9fb3906049428130d8fc22e715dc6665306ebbf483290dd139be5d7457d9749\\\",\\n \\\"sha256:1b0bb3f6ad7e8dbdc1d19cf782dc06227de1d95a5d075efb592196a509e6e3a9\\\",\\n \\\"sha256:f0761cecd36be7f88de04a51a9c741d047c0ad7bbd4e2312e57f40e3f6a68447\\\"\\n ]\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": 
\\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\", \" \\\"RepoTags\\\": [\", \" \\\"192.168.24.1:8787/rhceph:3-6\\\"\", \" ],\", \" \\\"RepoDigests\\\": [\", \" \\\"192.168.24.1:8787/rhceph@sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\\"\", \" ],\", \" \\\"Parent\\\": \\\"\\\",\", \" \\\"Comment\\\": \\\"\\\",\", \" \\\"Created\\\": \\\"2018-04-18T13:13:30.317845Z\\\",\", \" \\\"Container\\\": \\\"\\\",\", \" \\\"ContainerConfig\\\": {\", \" \\\"Hostname\\\": \\\"9817222a9fd1\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": [\", \" \\\"/bin/sh\\\",\", \" \\\"-c\\\",\", \" \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z2.repo'\\\"\", \" ],\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"sha256:e8b064b6d59e5ae67703983d9bcadb3e48e4bad1443bd2d8ca86096ce6969ba9\\\",\", \" \\\"Volumes\\\": {\", \" \\\"/etc/ceph\\\": {},\", \" \\\"/etc/ganesha\\\": {},\", \" \\\"/var/lib/ceph\\\": {}\", \" },\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"master\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" 
\\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"master\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\", \" \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"6\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\", \" \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"DockerVersion\\\": \\\"1.12.6\\\",\", \" \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" 
\\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"9817222a9fd1\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"e0292b8001103cbd70a728aa73b8c602430c923944c4fcbaf5e62eda9e16530f\\\",\", \" \\\"Volumes\\\": {\", \" \\\"/etc/ceph\\\": {},\", \" \\\"/etc/ganesha\\\": {},\", \" \\\"/var/lib/ceph\\\": {}\", \" },\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"master\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"master\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\", \" \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": 
\\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"6\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\", \" \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"Architecture\\\": \\\"amd64\\\",\", \" \\\"Os\\\": \\\"linux\\\",\", \" \\\"Size\\\": 732827275,\", \" \\\"VirtualSize\\\": 732827275,\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/4f400cee0cf5241b5500dfa5fc0ba4b0bf6b1d4756555cc7e07b19c2af9fb12b/diff:/var/lib/docker/overlay2/e7ceb7c5f142ee0ead3760ed5e37988896c004d9442b29434db4f7afb4c18364/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/4f331ea494c349b1dc25c0d4fb87b85b626b1d371ac0031cce8ddb5c48757818/merged\\\",\", \" \\\"UpperDir\\\": 
\\\"/var/lib/docker/overlay2/4f331ea494c349b1dc25c0d4fb87b85b626b1d371ac0031cce8ddb5c48757818/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/4f331ea494c349b1dc25c0d4fb87b85b626b1d371ac0031cce8ddb5c48757818/work\\\"\", \" }\", \" },\", \" \\\"RootFS\\\": {\", \" \\\"Type\\\": \\\"layers\\\",\", \" \\\"Layers\\\": [\", \" \\\"sha256:e9fb3906049428130d8fc22e715dc6665306ebbf483290dd139be5d7457d9749\\\",\", \" \\\"sha256:1b0bb3f6ad7e8dbdc1d19cf782dc06227de1d95a5d075efb592196a509e6e3a9\\\",\", \" \\\"sha256:f0761cecd36be7f88de04a51a9c741d047c0ad7bbd4e2312e57f40e3f6a68447\\\"\", \" ]\", \" }\", \" }\", \"]\"]}\n\nTASK [ceph-docker-common : set_fact image_repodigest_after_pulling] ************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:194\nFriday 22 June 2018 04:59:21 -0400 (0:00:00.568) 0:00:40.373 *********** \nok: [controller-0] => {\"ansible_facts\": {\"image_repodigest_after_pulling\": \"sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_mon_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:200\nFriday 22 June 2018 04:59:21 -0400 (0:00:00.081) 0:00:40.454 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_osd_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:211\nFriday 22 June 2018 04:59:21 -0400 (0:00:00.049) 0:00:40.503 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mds_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:222\nFriday 22 June 2018 04:59:21 -0400 (0:00:00.047) 0:00:40.551 *********** 
\nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rgw_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:233\nFriday 22 June 2018 04:59:21 -0400 (0:00:00.046) 0:00:40.598 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mgr_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:244\nFriday 22 June 2018 04:59:22 -0400 (0:00:00.044) 0:00:40.642 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_updated] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:255\nFriday 22 June 2018 04:59:22 -0400 (0:00:00.047) 0:00:40.689 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_nfs_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:266\nFriday 22 June 2018 04:59:22 -0400 (0:00:00.050) 0:00:40.740 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : export local ceph dev image] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:277\nFriday 22 June 2018 04:59:22 -0400 (0:00:00.048) 0:00:40.788 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : copy ceph dev image file] ***************************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:285\nFriday 22 June 2018 04:59:22 -0400 (0:00:00.047) 0:00:40.836 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : load ceph dev image] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:292\nFriday 22 June 2018 04:59:22 -0400 (0:00:00.048) 0:00:40.884 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : remove tmp ceph dev image file] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:297\nFriday 22 June 2018 04:59:22 -0400 (0:00:00.046) 0:00:40.931 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : get ceph version] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:84\nFriday 22 June 2018 04:59:22 -0400 (0:00:00.053) 0:00:40.984 *********** \nok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"--entrypoint\", \"/usr/bin/ceph\", \"192.168.24.1:8787/rhceph:3-6\", \"--version\"], \"delta\": \"0:00:00.550892\", \"end\": \"2018-06-22 08:59:23.429552\", \"rc\": 0, \"start\": \"2018-06-22 08:59:22.878660\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"ceph version 12.2.4-6.el7cp (78f60b924802e34d44f7078029a40dbe6c0c922f) luminous (stable)\", \"stdout_lines\": [\"ceph version 12.2.4-6.el7cp (78f60b924802e34d44f7078029a40dbe6c0c922f) luminous (stable)\"]}\n\nTASK [ceph-docker-common : set_fact ceph_version ceph_version.stdout.split] ****\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:90\nFriday 22 June 2018 04:59:23 -0400 (0:00:01.072) 0:00:42.057 *********** \nok: [controller-0] 
=> {\"ansible_facts\": {\"ceph_version\": \"12.2.4-6.el7cp\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_release jewel] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:2\nFriday 22 June 2018 04:59:23 -0400 (0:00:00.073) 0:00:42.130 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_release kraken] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:8\nFriday 22 June 2018 04:59:23 -0400 (0:00:00.047) 0:00:42.177 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_release luminous] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:14\nFriday 22 June 2018 04:59:23 -0400 (0:00:00.044) 0:00:42.222 *********** \nok: [controller-0] => {\"ansible_facts\": {\"ceph_release\": \"luminous\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_release mimic] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:20\nFriday 22 June 2018 04:59:23 -0400 (0:00:00.165) 0:00:42.388 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : create bootstrap directories] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml:2\nFriday 22 June 2018 04:59:23 -0400 (0:00:00.046) 0:00:42.434 *********** \nchanged: [controller-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 
64045}\nchanged: [controller-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\nchanged: [controller-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\nchanged: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\nchanged: [controller-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\n\nTASK [ceph-config : create ceph conf directory] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:4\nFriday 22 June 2018 04:59:26 -0400 (0:00:02.353) 0:00:44.788 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : generate ceph configuration file: ceph.conf] ***************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:12\nFriday 22 June 2018 04:59:26 -0400 (0:00:00.050) 
0:00:44.839 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : create a local fetch directory if it does not exist] *******\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:38\nFriday 22 June 2018 04:59:26 -0400 (0:00:00.049) 0:00:44.888 *********** \nok: [controller-0 -> localhost] => {\"changed\": false, \"gid\": 985, \"group\": \"mistral\", \"mode\": \"0755\", \"owner\": \"mistral\", \"path\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 988}\n\nTASK [ceph-config : generate cluster uuid] *************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:54\nFriday 22 June 2018 04:59:26 -0400 (0:00:00.198) 0:00:45.086 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : read cluster uuid if it already exists] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:64\nFriday 22 June 2018 04:59:26 -0400 (0:00:00.049) 0:00:45.135 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : ensure /etc/ceph exists] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:76\nFriday 22 June 2018 04:59:26 -0400 (0:00:00.043) 0:00:45.179 *********** \nchanged: [controller-0] => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\n\nTASK [ceph-config : generate ceph.conf configuration file] *********************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84\nFriday 22 June 2018 04:59:27 -0400 (0:00:00.622) 0:00:45.801 *********** \nNOTIFIED HANDLER ceph-defaults : set _mon_handler_called before restart for controller-0\nNOTIFIED HANDLER ceph-defaults : copy mon restart script for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - non container for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - container for controller-0\nNOTIFIED HANDLER ceph-defaults : set _mon_handler_called after restart for controller-0\nNOTIFIED HANDLER ceph-defaults : set _osd_handler_called before restart for controller-0\nNOTIFIED HANDLER ceph-defaults : copy osd restart script for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - non container for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - container for controller-0\nNOTIFIED HANDLER ceph-defaults : set _osd_handler_called after restart for controller-0\nNOTIFIED HANDLER ceph-defaults : set _mds_handler_called before restart for controller-0\nNOTIFIED HANDLER ceph-defaults : copy mds restart script for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - non container for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - container for controller-0\nNOTIFIED HANDLER ceph-defaults : set _mds_handler_called after restart for controller-0\nNOTIFIED HANDLER ceph-defaults : set _rgw_handler_called before restart for controller-0\nNOTIFIED HANDLER ceph-defaults : copy rgw restart script for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - non container for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - container for controller-0\nNOTIFIED HANDLER ceph-defaults : set _rgw_handler_called after restart for controller-0\nNOTIFIED HANDLER ceph-defaults : set _mgr_handler_called before restart for controller-0\nNOTIFIED HANDLER 
ceph-defaults : copy mgr restart script for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - non container for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - container for controller-0\nNOTIFIED HANDLER ceph-defaults : set _mgr_handler_called after restart for controller-0\nNOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called before restart for controller-0\nNOTIFIED HANDLER ceph-defaults : copy rbd mirror restart script for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - non container for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - container for controller-0\nNOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called after restart for controller-0\nchanged: [controller-0] => {\"changed\": true, \"checksum\": \"41c6f67e44237551a124af9a3133eb853ff83536\", \"dest\": \"/etc/ceph/ceph.conf\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"cb7deae369635e38668226c253d90026\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 664, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657967.33-194501377654424/source\", \"state\": \"file\", \"uid\": 0}\n\nTASK [ceph-config : set fsid fact when generate_fsid = true] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:102\nFriday 22 June 2018 04:59:30 -0400 (0:00:03.297) 0:00:49.099 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : set_fact docker_exec_cmd] *************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/main.yml:2\nFriday 22 June 2018 04:59:30 -0400 (0:00:00.047) 0:00:49.147 *********** \nok: [controller-0] => {\"ansible_facts\": {\"docker_exec_cmd\": \"docker exec ceph-mon-controller-0\"}, \"changed\": false}\n\nTASK [ceph-mon : make sure monitor_interface or 
monitor_address or monitor_address_block is configured] ***
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/check_mandatory_vars.yml:2
Friday 22 June 2018 04:59:30 -0400 (0:00:00.073) 0:00:49.220 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : make sure pg num is set for cephfs pools] *********************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/check_mandatory_vars.yml:10
Friday 22 June 2018 04:59:30 -0400 (0:00:00.051) 0:00:49.271 *********** 
skipping: [controller-0] => (item={u'name': u'cephfs_data', u'pgs': u''}) => {"changed": false, "item": {"name": "cephfs_data", "pgs": ""}, "skip_reason": "Conditional result was False"}
skipping: [controller-0] => (item={u'name': u'cephfs_metadata', u'pgs': u''}) => {"changed": false, "item": {"name": "cephfs_metadata", "pgs": ""}, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : generate monitor initial keyring] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:2
Friday 22 June 2018 04:59:30 -0400 (0:00:00.060) 0:00:49.331 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : read monitor initial keyring if it already exists] ************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:11
Friday 22 June 2018 04:59:30 -0400 (0:00:00.051) 0:00:49.383 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : create monitor initial keyring] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:22
Friday 22 June 2018 04:59:30 -0400 (0:00:00.043) 0:00:49.426 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : set initial monitor key permissions] **************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:34
Friday 22 June 2018 04:59:30 -0400 (0:00:00.044) 0:00:49.471 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : create (and fix ownership of) monitor directory] **************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:42
Friday 22 June 2018 04:59:30 -0400 (0:00:00.047) 0:00:49.518 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : set_fact client_admin_ceph_authtool_cap >= ceph_release_num.luminous] ***
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:51
Friday 22 June 2018 04:59:30 -0400 (0:00:00.042) 0:00:49.561 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : set_fact client_admin_ceph_authtool_cap < ceph_release_num.luminous] ***
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:63
Friday 22 June 2018 04:59:30 -0400 (0:00:00.044) 0:00:49.605 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : create custom admin keyring] **********************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:74
Friday 22 June 2018 04:59:31 -0400 (0:00:00.052) 0:00:49.658 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : set ownership of admin keyring] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:88
Friday 22 June 2018 04:59:31 -0400 (0:00:00.045) 0:00:49.703 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : import admin keyring into mon keyring] ************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:99
Friday 22 June 2018 04:59:31 -0400 (0:00:00.044) 0:00:49.747 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : ceph monitor mkfs with keyring] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:106
Friday 22 June 2018 04:59:31 -0400 (0:00:00.045) 0:00:49.793 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : ceph monitor mkfs without keyring] ****************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:113
Friday 22 June 2018 04:59:31 -0400 (0:00:00.044) 0:00:49.837 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : ensure systemd service override directory exists] *************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml:2
Friday 22 June 2018 04:59:31 -0400 (0:00:00.049) 0:00:49.887 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : add ceph-mon systemd service overrides] ***********************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml:10
Friday 22 June 2018 04:59:31 -0400 (0:00:00.045) 0:00:49.932 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : start the monitor service] ************************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml:20
Friday 22 June 2018 04:59:31 -0400 (0:00:00.044) 0:00:49.977 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : enable the ceph-mon.target service] ***************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml:29
Friday 22 June 2018 04:59:31 -0400 (0:00:00.045) 0:00:50.023 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : include ceph_keys.yml] ****************************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/main.yml:19
Friday 22 June 2018 04:59:31 -0400 (0:00:00.045) 0:00:50.068 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : collect all the pools] ****************************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/secure_cluster.yml:2
Friday 22 June 2018 04:59:31 -0400 (0:00:00.043) 0:00:50.111 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : secure the cluster] *******************************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/secure_cluster.yml:7
Friday 22 June 2018 04:59:31 -0400 (0:00:00.044) 0:00:50.156 *********** 

TASK [ceph-mon : set_fact ceph_config_keys] ************************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:2
Friday 22 June 2018 04:59:31 -0400 (0:00:00.047) 0:00:50.204 *********** 
ok: [controller-0] => {"ansible_facts": {"ceph_config_keys": ["/etc/ceph/ceph.client.admin.keyring", "/etc/ceph/monmap-ceph", "/etc/ceph/ceph.mon.keyring", "/var/lib/ceph/bootstrap-osd/ceph.keyring", "/var/lib/ceph/bootstrap-rgw/ceph.keyring", "/var/lib/ceph/bootstrap-mds/ceph.keyring"]}, "changed": false}

TASK [ceph-mon : register rbd bootstrap key] ***********************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:12
Friday 22 June 2018 04:59:31 -0400 (0:00:00.172) 0:00:50.376 *********** 
ok: [controller-0] => {"ansible_facts": {"bootstrap_rbd_keyring": ["/var/lib/ceph/bootstrap-rbd/ceph.keyring"]}, "changed": false}

TASK [ceph-mon : merge rbd bootstrap key to config and keys paths] *************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:18
Friday 22 June 2018 04:59:31 -0400 (0:00:00.170) 0:00:50.547 *********** 
ok: [controller-0] => {"ansible_facts": {"ceph_config_keys": ["/etc/ceph/ceph.client.admin.keyring", "/etc/ceph/monmap-ceph", "/etc/ceph/ceph.mon.keyring", "/var/lib/ceph/bootstrap-osd/ceph.keyring", "/var/lib/ceph/bootstrap-rgw/ceph.keyring", "/var/lib/ceph/bootstrap-mds/ceph.keyring", "/var/lib/ceph/bootstrap-rbd/ceph.keyring"]}, "changed": false}

TASK [ceph-mon : set_fact tmp_ceph_mgr_keys add mgr keys to config and keys paths] ***
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:23
Friday 22 June 2018 04:59:32 -0400 (0:00:00.174) 0:00:50.721 *********** 
ok: [controller-0] => (item=controller-0) => {"ansible_facts": {"tmp_ceph_mgr_keys": "/etc/ceph/ceph.mgr.controller-0.keyring"}, "changed": false, "item": "controller-0"}

TASK [ceph-mon : set_fact ceph_mgr_keys convert mgr keys to an array] **********
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:31
Friday 22 June 2018 04:59:32 -0400 (0:00:00.269) 0:00:50.990 *********** 
ok: [controller-0] => {"ansible_facts": {"ceph_mgr_keys": ["/etc/ceph/ceph.mgr.controller-0.keyring"]}, "changed": false}

TASK [ceph-mon : set_fact ceph_config_keys merge mgr keys to config and keys paths] ***
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:37
Friday 22 June 2018 04:59:32 -0400 (0:00:00.078) 0:00:51.069 *********** 
ok: [controller-0] => {"ansible_facts": {"ceph_config_keys": ["/etc/ceph/ceph.client.admin.keyring", "/etc/ceph/monmap-ceph", "/etc/ceph/ceph.mon.keyring", "/var/lib/ceph/bootstrap-osd/ceph.keyring", "/var/lib/ceph/bootstrap-rgw/ceph.keyring", "/var/lib/ceph/bootstrap-mds/ceph.keyring", "/var/lib/ceph/bootstrap-rbd/ceph.keyring", "/etc/ceph/ceph.mgr.controller-0.keyring"]}, "changed": false}

TASK [ceph-mon : stat for ceph config and keys] ********************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:43
Friday 22 June 2018 04:59:32 -0400 (0:00:00.079) 0:00:51.149 *********** 
ok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.client.admin.keyring) => {"changed": false, "failed_when_result": false, "item": "/etc/ceph/ceph.client.admin.keyring", "stat": {"exists": false}}
ok: [controller-0 -> localhost] => (item=/etc/ceph/monmap-ceph) => {"changed": false, "failed_when_result": false, "item": "/etc/ceph/monmap-ceph", "stat": {"exists": false}}
ok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.mon.keyring) => {"changed": false, "failed_when_result": false, "item": "/etc/ceph/ceph.mon.keyring", "stat": {"exists": false}}
ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) => {"changed": false, "failed_when_result": false, "item": "/var/lib/ceph/bootstrap-osd/ceph.keyring", "stat": {"exists": false}}
ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) => {"changed": false, "failed_when_result": false, "item": "/var/lib/ceph/bootstrap-rgw/ceph.keyring", "stat": {"exists": false}}
ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) => {"changed": false, "failed_when_result": false, "item": "/var/lib/ceph/bootstrap-mds/ceph.keyring", "stat": {"exists": false}}
ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) => {"changed": false, "failed_when_result": false, "item": "/var/lib/ceph/bootstrap-rbd/ceph.keyring", "stat": {"exists": false}}
ok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.mgr.controller-0.keyring) => {"changed": false, "failed_when_result": false, "item": "/etc/ceph/ceph.mgr.controller-0.keyring", "stat": {"exists": false}}

TASK [ceph-mon : try to copy ceph keys] ****************************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:54
Friday 22 June 2018 04:59:33 -0400 (0:00:01.147) 0:00:52.296 *********** 
skipping: [controller-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.client.admin.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.client.admin.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {"changed": false, "item": ["/etc/ceph/ceph.client.admin.keyring", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.client.admin.keyring"}}, "item": "/etc/ceph/ceph.client.admin.keyring", "stat": {"exists": false}}], "skip_reason": "Conditional result was False"}
skipping: [controller-0] => (item=[u'/etc/ceph/monmap-ceph', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/monmap-ceph', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/monmap-ceph', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {"changed": false, "item": ["/etc/ceph/monmap-ceph", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/monmap-ceph"}}, "item": "/etc/ceph/monmap-ceph", "stat": {"exists": false}}], "skip_reason": "Conditional result was False"}
skipping: [controller-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mon.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mon.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {"changed": false, "item": ["/etc/ceph/ceph.mon.keyring", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mon.keyring"}}, "item": "/etc/ceph/ceph.mon.keyring", "stat": {"exists": false}}], "skip_reason": "Conditional result was False"}
skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-osd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {"changed": false, "item": ["/var/lib/ceph/bootstrap-osd/ceph.keyring", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-osd/ceph.keyring"}}, "item": "/var/lib/ceph/bootstrap-osd/ceph.keyring", "stat": {"exists": false}}], "skip_reason": "Conditional result was False"}
skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {"changed": false, "item": ["/var/lib/ceph/bootstrap-rgw/ceph.keyring", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rgw/ceph.keyring"}}, "item": "/var/lib/ceph/bootstrap-rgw/ceph.keyring", "stat": {"exists": false}}], "skip_reason": "Conditional result was False"}
skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-mds/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {"changed": false, "item": ["/var/lib/ceph/bootstrap-mds/ceph.keyring", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-mds/ceph.keyring"}}, "item": "/var/lib/ceph/bootstrap-mds/ceph.keyring", "stat": {"exists": false}}], "skip_reason": "Conditional result was False"}
skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {"changed": false, "item": ["/var/lib/ceph/bootstrap-rbd/ceph.keyring", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rbd/ceph.keyring"}}, "item": "/var/lib/ceph/bootstrap-rbd/ceph.keyring", "stat": {"exists": false}}], "skip_reason": "Conditional result was False"}
skipping: [controller-0] => (item=[u'/etc/ceph/ceph.mgr.controller-0.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mgr.controller-0.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {"changed": false, "item": ["/etc/ceph/ceph.mgr.controller-0.keyring", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mgr.controller-0.keyring"}}, "item": "/etc/ceph/ceph.mgr.controller-0.keyring", "stat": {"exists": false}}], "skip_reason": "Conditional result was False"}

TASK [ceph-mon : try to copy ceph config] **************************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:68
Friday 22 June 2018 04:59:33 -0400 (0:00:00.149) 0:00:52.446 *********** 
skipping: [controller-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.client.admin.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.client.admin.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {"changed": false, "item": ["/etc/ceph/ceph.client.admin.keyring", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.client.admin.keyring"}}, "item": "/etc/ceph/ceph.client.admin.keyring", "stat": {"exists": false}}], "skip_reason": "Conditional result was False"}
skipping: [controller-0] => (item=[u'/etc/ceph/monmap-ceph', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/monmap-ceph', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/monmap-ceph', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {"changed": false, "item": ["/etc/ceph/monmap-ceph", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/monmap-ceph"}}, "item": "/etc/ceph/monmap-ceph", "stat": {"exists": false}}], "skip_reason": "Conditional result was False"}
skipping: [controller-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mon.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mon.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {"changed": false, "item": ["/etc/ceph/ceph.mon.keyring", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mon.keyring"}}, "item": "/etc/ceph/ceph.mon.keyring", "stat": {"exists": false}}], "skip_reason": "Conditional result was False"}
skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-osd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {"changed": false, "item": ["/var/lib/ceph/bootstrap-osd/ceph.keyring", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-osd/ceph.keyring"}}, "item": "/var/lib/ceph/bootstrap-osd/ceph.keyring", "stat": {"exists": false}}], "skip_reason": "Conditional result was False"}
skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {"changed": false, "item": ["/var/lib/ceph/bootstrap-rgw/ceph.keyring", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rgw/ceph.keyring"}}, "item": "/var/lib/ceph/bootstrap-rgw/ceph.keyring", "stat": {"exists": false}}], "skip_reason": "Conditional result was False"}
skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-mds/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {"changed": false, "item": ["/var/lib/ceph/bootstrap-mds/ceph.keyring", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-mds/ceph.keyring"}}, "item": "/var/lib/ceph/bootstrap-mds/ceph.keyring", "stat": {"exists": false}}], "skip_reason": "Conditional result was False"}
skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {"changed": false, "item": ["/var/lib/ceph/bootstrap-rbd/ceph.keyring", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rbd/ceph.keyring"}}, "item": "/var/lib/ceph/bootstrap-rbd/ceph.keyring", "stat": {"exists": false}}], "skip_reason": "Conditional result was False"}
skipping: [controller-0] => (item=[u'/etc/ceph/ceph.mgr.controller-0.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mgr.controller-0.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {"changed": false, "item": ["/etc/ceph/ceph.mgr.controller-0.keyring", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mgr.controller-0.keyring"}}, "item": "/etc/ceph/ceph.mgr.controller-0.keyring", "stat": {"exists": false}}], "skip_reason": "Conditional result was False"}

TASK [ceph-mon : set selinux permissions] **************************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:83
Friday 22 June 2018 04:59:33 -0400 (0:00:00.141) 0:00:52.588 *********** 
ok: [controller-0] => (item=/etc/ceph) => {"changed": false, "cmd": "chcon -Rt svirt_sandbox_file_t /etc/ceph", "delta": "0:00:00.005636", "end": "2018-06-22 08:59:34.512006", "item": "/etc/ceph", "rc": 0, "start": "2018-06-22 08:59:34.506370", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
ok: [controller-0] => (item=/var/lib/ceph) => {"changed": false, "cmd": "chcon -Rt svirt_sandbox_file_t /var/lib/ceph", "delta": "0:00:00.005283", "end": "2018-06-22 08:59:34.956030", "item": "/var/lib/ceph", "rc": 0, 
\"start\": \"2018-06-22 08:59:34.950747\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}

TASK [ceph-mon : populate kv_store with default ceph.conf] *********************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:2
Friday 22 June 2018 04:59:34 -0400 (0:00:00.986) 0:00:53.574 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : populate kv_store with custom ceph.conf] **********************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:18
Friday 22 June 2018 04:59:35 -0400 (0:00:00.053) 0:00:53.627 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : delete populate-kv-store docker] ******************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:36
Friday 22 June 2018 04:59:35 -0400 (0:00:00.051) 0:00:53.679 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : generate systemd unit file] ***********************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:43
Friday 22 June 2018 04:59:35 -0400 (0:00:00.045) 0:00:53.725 *********** 
changed: [controller-0] => {"changed": true, "checksum": "389416528a79daff9f46e3b26fae4605355acb8e", "dest": "/etc/systemd/system/ceph-mon@.service", "gid": 0, "group": "root", "md5sum": "9619471e5e05d96278e92bd89fa78172", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:systemd_unit_file_t:s0", "size": 794, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657975.15-190521990697669/source", "state": "file", "uid": 0}

TASK [ceph-mon : systemd start mon container] 
**********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:54\nFriday 22 June 2018 04:59:37 -0400 (0:00:02.878) 0:00:56.604 *********** \nok: [controller-0] => {\"changed\": false, \"enabled\": true, \"name\": \"ceph-mon@controller-0\", \"state\": \"started\", \"status\": {\"ActiveEnterTimestampMonotonic\": \"0\", \"ActiveExitTimestampMonotonic\": \"0\", \"ActiveState\": \"inactive\", \"After\": \"docker.service basic.target system-ceph\\\\x5cx2dmon.slice systemd-journald.socket\", \"AllowIsolate\": \"no\", \"AmbientCapabilities\": \"0\", \"AssertResult\": \"no\", \"AssertTimestampMonotonic\": \"0\", \"Before\": \"shutdown.target\", \"BlockIOAccounting\": \"no\", \"BlockIOWeight\": \"18446744073709551615\", \"CPUAccounting\": \"no\", \"CPUQuotaPerSecUSec\": \"infinity\", \"CPUSchedulingPolicy\": \"0\", \"CPUSchedulingPriority\": \"0\", \"CPUSchedulingResetOnFork\": \"no\", \"CPUShares\": \"18446744073709551615\", \"CanIsolate\": \"no\", \"CanReload\": \"no\", \"CanStart\": \"yes\", \"CanStop\": \"yes\", \"CapabilityBoundingSet\": \"18446744073709551615\", \"ConditionResult\": \"no\", \"ConditionTimestampMonotonic\": \"0\", \"Conflicts\": \"shutdown.target\", \"ControlPID\": \"0\", \"DefaultDependencies\": \"yes\", \"Delegate\": \"no\", \"Description\": \"Ceph Monitor\", \"DevicePolicy\": \"auto\", \"EnvironmentFile\": \"/etc/environment (ignore_errors=yes)\", \"ExecMainCode\": \"0\", \"ExecMainExitTimestampMonotonic\": \"0\", \"ExecMainPID\": \"0\", \"ExecMainStartTimestampMonotonic\": \"0\", \"ExecMainStatus\": \"0\", \"ExecStart\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker run --rm --name ceph-mon-%i --net=host --memory=1g --cpu-quota=100000 -v /var/lib/ceph:/var/lib/ceph -v /etc/ceph:/etc/ceph -v /etc/localtime:/etc/localtime:ro --net=host -e IP_VERSION=4 -e MON_IP=172.17.3.11 -e CLUSTER=ceph -e FSID=53912472-747b-11e8-95a3-5254003d7dcb -e CEPH_PUBLIC_NETWORK=172.17.3.0/24 -e 
CEPH_DAEMON=MON 192.168.24.1:8787/rhceph:3-6 ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStartPre\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm ceph-mon-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStopPost\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-mon-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"FailureAction\": \"none\", \"FileDescriptorStoreMax\": \"0\", \"FragmentPath\": \"/etc/systemd/system/ceph-mon@.service\", \"GuessMainPID\": \"yes\", \"IOScheduling\": \"0\", \"Id\": \"ceph-mon@controller-0.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", \"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", \"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", \"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", \"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", \"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", \"LimitMEMLOCK\": \"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": \"4096\", \"LimitNPROC\": \"127793\", \"LimitRSS\": \"18446744073709551615\", \"LimitRTPRIO\": \"0\", \"LimitRTTIME\": \"18446744073709551615\", \"LimitSIGPENDING\": \"127793\", \"LimitSTACK\": \"18446744073709551615\", \"LoadState\": \"loaded\", \"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": \"18446744073709551615\", \"MountFlags\": \"0\", \"Names\": \"ceph-mon@controller-0.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", \"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": \"0\", \"OnFailureJobMode\": 
\"replace\", \"PermissionsStartOnly\": \"no\", \"PrivateDevices\": \"no\", \"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": \"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", \"Slice\": \"system-ceph\\\\x5cx2dmon.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", \"StandardOutput\": \"journal\", \"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", \"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": \"dead\", \"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": \"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", \"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", \"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": \"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", \"Transient\": \"no\", \"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": \"disabled\", \"Wants\": \"system-ceph\\\\x5cx2dmon.slice\", \"WatchdogTimestampMonotonic\": \"0\", \"WatchdogUSec\": \"0\"}}\n\nTASK [ceph-mon : configure ceph profile.d aliases] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/configure_ceph_command_aliases.yml:2\nFriday 22 June 2018 04:59:38 -0400 (0:00:00.957) 0:00:57.561 *********** \nchanged: [controller-0] => {\"changed\": true, \"checksum\": \"78965c7dfcde4827c1cb8645bc7a444472e87718\", \"dest\": 
\"/etc/profile.d/ceph-aliases.sh\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"66a9bfe5c26a22ade3c67cc7c7a58d2c\", \"mode\": \"0755\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:bin_t:s0\", \"size\": 375, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657978.98-217864557131037/source\", \"state\": \"file\", \"uid\": 0}\n\nTASK [ceph-mon : wait for monitor socket to exist] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:12\nFriday 22 June 2018 04:59:41 -0400 (0:00:02.897) 0:01:00.458 *********** \nchanged: [controller-0] => {\"attempts\": 1, \"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"sh\", \"-c\", \"stat /var/run/ceph/ceph-mon.controller-0.asok || stat /var/run/ceph/ceph-mon.controller-0.localdomain.asok\"], \"delta\": \"0:00:00.089936\", \"end\": \"2018-06-22 08:59:42.437535\", \"rc\": 0, \"start\": \"2018-06-22 08:59:42.347599\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \" File: '/var/run/ceph/ceph-mon.controller-0.asok'\\n Size: 0 \\tBlocks: 0 IO Block: 4096 socket\\nDevice: 33h/51d\\tInode: 96802491 Links: 1\\nAccess: (0755/srwxr-xr-x) Uid: ( 167/ ceph) Gid: ( 167/ ceph)\\nAccess: 2018-06-22 08:59:40.510805580 +0000\\nModify: 2018-06-22 08:59:40.510805580 +0000\\nChange: 2018-06-22 08:59:40.510805580 +0000\\n Birth: -\", \"stdout_lines\": [\" File: '/var/run/ceph/ceph-mon.controller-0.asok'\", \" Size: 0 \\tBlocks: 0 IO Block: 4096 socket\", \"Device: 33h/51d\\tInode: 96802491 Links: 1\", \"Access: (0755/srwxr-xr-x) Uid: ( 167/ ceph) Gid: ( 167/ ceph)\", \"Access: 2018-06-22 08:59:40.510805580 +0000\", \"Modify: 2018-06-22 08:59:40.510805580 +0000\", \"Change: 2018-06-22 08:59:40.510805580 +0000\", \" Birth: -\"]}\n\nTASK [ceph-mon : ipv4 - force peer addition as potential bootstrap peer for cluster bringup - monitor_interface] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:19\nFriday 22 June 2018 
04:59:42 -0400 (0:00:00.608) 0:01:01.067 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : ipv4 - force peer addition as potential bootstrap peer for cluster bringup - monitor_address] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:29\nFriday 22 June 2018 04:59:42 -0400 (0:00:00.086) 0:01:01.154 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : ipv4 - force peer addition as potential bootstrap peer for cluster bringup - monitor_address_block] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:39\nFriday 22 June 2018 04:59:42 -0400 (0:00:00.085) 0:01:01.240 *********** \nok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--admin-daemon\", \"/var/run/ceph/ceph-mon.controller-0.asok\", \"add_bootstrap_peer_hint\", \"172.17.3.11\"], \"delta\": \"0:00:00.181632\", \"end\": \"2018-06-22 08:59:43.399533\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-22 08:59:43.217901\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"mon already active; ignoring bootstrap hint\", \"stdout_lines\": [\"mon already active; ignoring bootstrap hint\"]}\n\nTASK [ceph-mon : ipv6 - force peer addition as potential bootstrap peer for cluster bringup - monitor_interface] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:49\nFriday 22 June 2018 04:59:43 -0400 (0:00:00.784) 0:01:02.024 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : ipv6 - force peer addition as potential bootstrap peer for cluster bringup - monitor_address] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:59\nFriday 22 June 2018 04:59:43 -0400 (0:00:00.052) 0:01:02.077 *********** 
\nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : ipv6 - force peer addition as potential bootstrap peer for cluster bringup - monitor_address_block] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:69\nFriday 22 June 2018 04:59:43 -0400 (0:00:00.053) 0:01:02.131 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : push ceph files to the ansible server] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/fetch_configs.yml:2\nFriday 22 June 2018 04:59:43 -0400 (0:00:00.055) 0:01:02.186 *********** \nchanged: [controller-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.client.admin.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.client.admin.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": true, \"checksum\": \"c11ffdd0583ec38685e73e9e23806b1ab1796f83\", \"dest\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb/etc/ceph/ceph.client.admin.keyring\", \"item\": [\"/etc/ceph/ceph.client.admin.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, 
\"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.client.admin.keyring\"}}, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": \"8521e5c370de28dd196a5547e194df11\", \"remote_checksum\": \"c11ffdd0583ec38685e73e9e23806b1ab1796f83\", \"remote_md5sum\": null}\nfailed: [controller-0] (item=[u'/etc/ceph/monmap-ceph', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/monmap-ceph', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/monmap-ceph', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/etc/ceph/monmap-ceph\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": 
\"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/monmap-ceph\"}}, \"item\": \"/etc/ceph/monmap-ceph\", \"stat\": {\"exists\": false}}], \"msg\": \"file not found: /etc/ceph/monmap-ceph\"}\nchanged: [controller-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mon.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mon.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": true, \"checksum\": \"92ab17613a97ae7cc44b346dae569b9621cfa58c\", \"dest\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb/etc/ceph/ceph.mon.keyring\", \"item\": [\"/etc/ceph/ceph.mon.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mon.keyring\"}}, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"exists\": false}}], 
\"md5sum\": \"b7f24f6f4dd819f255a696a198039699\", \"remote_checksum\": \"92ab17613a97ae7cc44b346dae569b9621cfa58c\", \"remote_md5sum\": null}\nchanged: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-osd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": true, \"checksum\": \"7bc2403b825b6dc38c05550b562edae21a8a427c\", \"dest\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"item\": [\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-osd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": 
\"aae8af10d53876571689885523700ac5\", \"remote_checksum\": \"7bc2403b825b6dc38c05550b562edae21a8a427c\", \"remote_md5sum\": null}\nchanged: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": true, \"checksum\": \"7fcd04c801012ba2ce36cd691ad1f9d78e601ce7\", \"dest\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"item\": [\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rgw/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": 
\"7cf75a1195ec902e4c83476d421838bd\", \"remote_checksum\": \"7fcd04c801012ba2ce36cd691ad1f9d78e601ce7\", \"remote_md5sum\": null}\nchanged: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-mds/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": true, \"checksum\": \"a9c3167c61374bc5b2700218dc7da7ef3586d25a\", \"dest\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-mds/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": 
\"e813210a29326b4c8491bb41f44b3ac9\", \"remote_checksum\": \"a9c3167c61374bc5b2700218dc7da7ef3586d25a\", \"remote_md5sum\": null}\nchanged: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": true, \"checksum\": \"bc3d34502a665ea005f6da312409602a2e167f1b\", \"dest\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"item\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rbd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": 
\"a185e79c17052dc0692c1fedd879676c\", \"remote_checksum\": \"bc3d34502a665ea005f6da312409602a2e167f1b\", \"remote_md5sum\": null}\nfailed: [controller-0] (item=[u'/etc/ceph/ceph.mgr.controller-0.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mgr.controller-0.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mgr.controller-0.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mgr.controller-0.keyring\"}}, \"item\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"stat\": {\"exists\": false}}], \"msg\": \"file not found: /etc/ceph/ceph.mgr.controller-0.keyring\"}\n\nRUNNING HANDLER [ceph-defaults : set _mon_handler_called before restart] *******\nFriday 22 June 2018 04:59:47 -0400 (0:00:03.892) 0:01:06.079 *********** \n\nRUNNING HANDLER [ceph-defaults : copy mon restart 
script] **********************
Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.079 *********** 

RUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - non container] ***
Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.079 *********** 

RUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - container] *******
Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.080 *********** 

RUNNING HANDLER [ceph-defaults : set _mon_handler_called after restart] ********
Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.080 *********** 

RUNNING HANDLER [ceph-defaults : set _osd_handler_called before restart] *******
Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.081 *********** 

RUNNING HANDLER [ceph-defaults : copy osd restart script] **********************
Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.081 *********** 

RUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - non container] ***
Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.081 *********** 

RUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - container] ******
Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.082 *********** 

RUNNING HANDLER [ceph-defaults : set _osd_handler_called after restart] ********
Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.083 *********** 

RUNNING HANDLER [ceph-defaults : set _mds_handler_called before restart] *******
Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.083 *********** 

RUNNING HANDLER [ceph-defaults : copy mds restart script] **********************
Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.083 *********** 

RUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - non container] ***
Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.084 *********** 

RUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - container] *******
Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.084 *********** 

RUNNING HANDLER [ceph-defaults : set _mds_handler_called after restart] ********
Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.084 *********** 

RUNNING HANDLER [ceph-defaults : set _rgw_handler_called before restart] *******
Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.085 *********** 

RUNNING HANDLER [ceph-defaults : copy rgw restart script] **********************
Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.085 *********** 

RUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - non container] ***
Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.086 *********** 

RUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - container] *******
Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.086 *********** 

RUNNING HANDLER [ceph-defaults : set _rgw_handler_called after restart] ********
Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.086 *********** 

RUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called before restart] ***
Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.087 *********** 

RUNNING HANDLER [ceph-defaults : copy rbd mirror restart script] ***************
Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.087 *********** 

RUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - non container] ***
Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.087 *********** 

RUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - container] ***
Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.088 *********** 

RUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called after restart] ***
Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.088 *********** 

RUNNING HANDLER [ceph-defaults : set _mgr_handler_called before restart] *******
Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.088 *********** 

RUNNING HANDLER [ceph-defaults : copy mgr restart script] **********************
Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.089 *********** 

RUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - non container] ***
Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.089 *********** 

RUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - container] *******
Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.089 *********** 

RUNNING HANDLER [ceph-defaults : set _mgr_handler_called after restart] ********
Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.090 *********** 

PLAY RECAP *********************************************************************
ceph-0 : ok=3 changed=0 unreachable=0 failed=0 
compute-0 : ok=4 changed=0 unreachable=0 failed=0 
controller-0 : ok=54 changed=7 unreachable=0 failed=1 


INSTALLER STATUS ***************************************************************
Install Ceph Monitor : In Progress (0:00:55)
	This phase can be restarted by running: roles/ceph-mon/tasks/main.yml

Friday 22 June 2018 04:59:47 -0400 (0:00:00.004) 0:01:06.095 *********** 
=============================================================================== 
ceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-6 image -------- 16.60s
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179 ----
gather and delegate facts ----------------------------------------------- 7.00s
/usr/share/ceph-ansible/site-docker.yml.sample:29 -----------------------------
ceph-mon : push ceph files to the ansible server ------------------------ 3.89s
/usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/fetch_configs.yml:2 -------
ceph-config : generate ceph.conf configuration file --------------------- 3.30s
/usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84 -------------------
gather facts ------------------------------------------------------------ 3.29s
/usr/share/ceph-ansible/site-docker.yml.sample:24 -----------------------------
ceph-mon : configure ceph profile.d aliases ----------------------------- 2.90s
/usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/configure_ceph_command_aliases.yml:2 
ceph-mon : generate systemd unit file ----------------------------------- 2.88s
/usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:43 
ceph-docker-common : create bootstrap directories ----------------------- 2.35s
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml:2 -
ceph-docker-common : stat for ceph config and keys ---------------------- 1.18s
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:30 -
ceph-mon : stat for ceph config and keys -------------------------------- 1.15s
/usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:43 -------
ceph-docker-common : get ceph version ----------------------------------- 1.07s
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:84 ------------
ceph-mon : set selinux permissions -------------------------------------- 0.99s
/usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:83 -------
ceph-docker-common : remove ceph udev rules ----------------------------- 0.97s
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml:2 
ceph-mon : systemd start mon container ---------------------------------- 0.96s
/usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:54 
ceph-mon : ipv4 - force peer addition as potential bootstrap peer for cluster bringup - monitor_address_block --- 0.78s
/usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:39 ---------------
check if it is atomic host ---------------------------------------------- 0.69s
/usr/share/ceph-ansible/site-docker.yml.sample:37 -----------------------------
ceph-defaults : check for a mon container ------------------------------- 
0.65s\n/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:2 \nceph-config : ensure /etc/ceph exists ----------------------------------- 0.62s\n/usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:76 -------------------\nceph-mon : wait for monitor socket to exist ----------------------------- 0.61s\n/usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:12 ---------------\nceph-defaults : check if it is atomic host ------------------------------ 0.60s\n/usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:2 -----------------", "stdout_lines": ["ansible-playbook 2.5.4", " config file = /usr/share/ceph-ansible/ansible.cfg", " configured module search path = [u'/usr/share/ceph-ansible/library']", " ansible python module location = /usr/lib/python2.7/site-packages/ansible", " executable location = /usr/bin/ansible-playbook", " python version = 2.7.5 (default, Feb 20 2018, 09:19:12) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]", "Using /usr/share/ceph-ansible/ansible.cfg as config file", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically 
imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/secure_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/configure_ceph_command_aliases.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/fetch_configs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/openstack_config.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/create_mds_filesystems.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/calamari.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/common.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/main.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/selinux.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/start_docker_mgr.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_gpt.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/common.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/non_containerized.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/containerized.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically 
imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rgw/tasks/common.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", 
"statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-nfs/tasks/common.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/pre_requisite_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/pre_requisite_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/create_rgw_nfs_user.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/ganesha_selinux_fix.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/start_nfs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/common.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/pre_requisite.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/start_rbd_mirror.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/configure_mirroring.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/docker/main.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/docker/selinux.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/docker/start_docker_rbd_mirror.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/pre_requisite.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/start_restapi.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/docker/main.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/docker/copy_configs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/docker/start_docker_restapi.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically 
imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-client/tasks/pre_requisite.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml", "", "PLAYBOOK: site-docker.yml.sample ***********************************************", "12 plays in /usr/share/ceph-ansible/site-docker.yml.sample", "", "PLAY [mons,agents,osds,mdss,rgws,nfss,restapis,rbdmirrors,clients,iscsigws,mgrs] ***", "", "TASK [gather facts] ************************************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:24", "Friday 22 June 2018 04:58:41 -0400 (0:00:00.140) 0:00:00.140 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "ok: [compute-0]", "", "TASK [gather and delegate facts] ***********************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:29", "Friday 22 June 2018 04:58:44 -0400 (0:00:03.286) 0:00:03.427 *********** ", "ok: [controller-0 -> 192.168.24.13] => (item=ceph-0)", "ok: [ceph-0 -> 192.168.24.13] => (item=ceph-0)", "ok: [compute-0 -> 192.168.24.13] => (item=ceph-0)", "ok: [controller-0 -> 192.168.24.12] => (item=controller-0)", "ok: [compute-0 -> 192.168.24.12] => (item=controller-0)", "ok: [ceph-0 -> 192.168.24.12] => (item=controller-0)", "", "TASK [check if it is atomic host] **********************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:37", "Friday 22 June 2018 04:58:51 -0400 (0:00:06.997) 0:00:10.425 *********** ", "ok: [compute-0] => {\"changed\": false, \"stat\": {\"exists\": false}}", "ok: [controller-0] => {\"changed\": false, \"stat\": {\"exists\": false}}", "ok: [ceph-0] => {\"changed\": false, \"stat\": {\"exists\": false}}", "", "TASK [set_fact is_atomic] ******************************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:44", "Friday 22 June 2018 04:58:52 -0400 (0:00:00.685) 0:00:11.110 
*********** ", "ok: [controller-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}", "ok: [ceph-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}", "ok: [compute-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}", "META: ran handlers", "META: ran handlers", "", "TASK [pull rhceph image] *******************************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:65", "Friday 22 June 2018 04:58:52 -0400 (0:00:00.152) 0:00:11.263 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "META: ran handlers", "", "PLAY [mons] ********************************************************************", "META: ran handlers", "", "TASK [set ceph monitor install 'In Progress'] **********************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:75", "Friday 22 June 2018 04:58:52 -0400 (0:00:00.112) 0:00:11.375 *********** ", "ok: [controller-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_mon\": {\"start\": \"20180622045852Z\", \"status\": \"In Progress\"}}, \"per_host\": false}, \"changed\": false}", "META: ran handlers", "META: ran handlers", "", "PLAY [mons] ********************************************************************", "META: ran handlers", "", "TASK [ceph-defaults : check for a mon container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:2", "Friday 22 June 2018 04:58:52 -0400 (0:00:00.161) 0:00:11.537 *********** ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mon-controller-0\"], \"delta\": \"0:00:00.029870\", \"end\": 
\"2018-06-22 08:58:53.560953\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-22 08:58:53.531083\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-defaults : check for an osd container] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:11", "Friday 22 June 2018 04:58:53 -0400 (0:00:00.645) 0:00:12.183 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a mds container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:20", "Friday 22 June 2018 04:58:53 -0400 (0:00:00.043) 0:00:12.226 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a rgw container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:29", "Friday 22 June 2018 04:58:53 -0400 (0:00:00.045) 0:00:12.271 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a mgr container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:38", "Friday 22 June 2018 04:58:53 -0400 (0:00:00.043) 0:00:12.315 *********** ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mgr-controller-0\"], \"delta\": \"0:00:00.028308\", \"end\": \"2018-06-22 08:58:54.239037\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-22 08:58:54.210729\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-defaults : check for a rbd mirror container] 
************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:47", "Friday 22 June 2018 04:58:54 -0400 (0:00:00.544) 0:00:12.859 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a nfs container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:56", "Friday 22 June 2018 04:58:54 -0400 (0:00:00.053) 0:00:12.913 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mon socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:2", "Friday 22 June 2018 04:58:54 -0400 (0:00:00.056) 0:00:12.970 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mon socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:11", "Friday 22 June 2018 04:58:54 -0400 (0:00:00.047) 0:00:13.018 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mon socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:21", "Friday 22 June 2018 04:58:54 -0400 (0:00:00.048) 0:00:13.066 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph osd socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:30", "Friday 22 June 2018 04:58:54 -0400 
(0:00:00.045) 0:00:13.111 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph osd socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:40", "Friday 22 June 2018 04:58:54 -0400 (0:00:00.044) 0:00:13.156 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph osd socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:50", "Friday 22 June 2018 04:58:54 -0400 (0:00:00.044) 0:00:13.201 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mds socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:59", "Friday 22 June 2018 04:58:54 -0400 (0:00:00.042) 0:00:13.243 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mds socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:69", "Friday 22 June 2018 04:58:54 -0400 (0:00:00.043) 0:00:13.287 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mds socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:79", "Friday 22 June 2018 04:58:54 -0400 (0:00:00.045) 0:00:13.332 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK 
[ceph-defaults : check for a ceph rgw socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:88", "Friday 22 June 2018 04:58:54 -0400 (0:00:00.043) 0:00:13.375 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph rgw socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:98", "Friday 22 June 2018 04:58:54 -0400 (0:00:00.042) 0:00:13.417 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph rgw socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:108", "Friday 22 June 2018 04:58:54 -0400 (0:00:00.042) 0:00:13.459 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mgr socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:117", "Friday 22 June 2018 04:58:54 -0400 (0:00:00.041) 0:00:13.501 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mgr socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:127", "Friday 22 June 2018 04:58:54 -0400 (0:00:00.045) 0:00:13.546 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mgr socket if exists and not used by a process] ***", "task path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:137", "Friday 22 June 2018 04:58:54 -0400 (0:00:00.044) 0:00:13.591 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph rbd mirror socket] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:146", "Friday 22 June 2018 04:58:55 -0400 (0:00:00.042) 0:00:13.633 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph rbd mirror socket is in-use] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:156", "Friday 22 June 2018 04:58:55 -0400 (0:00:00.045) 0:00:13.679 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph rbd mirror socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:166", "Friday 22 June 2018 04:58:55 -0400 (0:00:00.044) 0:00:13.723 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph nfs ganesha socket] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:175", "Friday 22 June 2018 04:58:55 -0400 (0:00:00.047) 0:00:13.770 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph nfs ganesha socket is in-use] **********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:184", "Friday 22 June 2018 04:58:55 -0400 (0:00:00.046) 0:00:13.817 
*********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph nfs ganesha socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:194", "Friday 22 June 2018 04:58:55 -0400 (0:00:00.045) 0:00:13.863 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if it is atomic host] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:2", "Friday 22 June 2018 04:58:55 -0400 (0:00:00.045) 0:00:13.909 *********** ", "ok: [controller-0] => {\"changed\": false, \"stat\": {\"exists\": false}}", "", "TASK [ceph-defaults : set_fact is_atomic] **************************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:7", "Friday 22 June 2018 04:58:55 -0400 (0:00:00.600) 0:00:14.509 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact monitor_name ansible_hostname] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:11", "Friday 22 June 2018 04:58:56 -0400 (0:00:00.249) 0:00:14.759 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"monitor_name\": \"controller-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact monitor_name ansible_fqdn] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:17", "Friday 22 June 2018 04:58:56 -0400 (0:00:00.072) 0:00:14.831 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact docker_exec_cmd] ********************************", "task path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:23", "Friday 22 June 2018 04:58:56 -0400 (0:00:00.069) 0:00:14.901 *********** ", "ok: [controller-0 -> 192.168.24.12] => {\"ansible_facts\": {\"docker_exec_cmd\": \"docker exec ceph-mon-controller-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : is ceph running already?] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:34", "Friday 22 June 2018 04:58:56 -0400 (0:00:00.133) 0:00:15.034 *********** ", "ok: [controller-0 -> 192.168.24.12] => {\"changed\": false, \"cmd\": [\"timeout\", \"5\", \"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"fsid\"], \"delta\": \"0:00:00.031808\", \"end\": \"2018-06-22 08:58:56.972239\", \"failed_when_result\": false, \"msg\": \"non-zero return code\", \"rc\": 1, \"start\": \"2018-06-22 08:58:56.940431\", \"stderr\": \"Error response from daemon: No such container: ceph-mon-controller-0\", \"stderr_lines\": [\"Error response from daemon: No such container: ceph-mon-controller-0\"], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-defaults : check if /var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir directory exists] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:47", "Friday 22 June 2018 04:58:56 -0400 (0:00:00.566) 0:00:15.600 *********** ", "ok: [controller-0 -> localhost] => {\"changed\": false, \"stat\": {\"exists\": false}}", "", "TASK [ceph-defaults : set_fact ceph_current_fsid rc 1] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:57", "Friday 22 June 2018 04:58:57 -0400 (0:00:00.194) 0:00:15.795 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : create a local fetch directory if it does not exist] *****", "task path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:64", "Friday 22 June 2018 04:58:57 -0400 (0:00:00.052) 0:00:15.847 *********** ", "ok: [controller-0 -> localhost] => {\"changed\": false, \"gid\": 985, \"group\": \"mistral\", \"mode\": \"0755\", \"owner\": \"mistral\", \"path\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 988}", "", "TASK [ceph-defaults : set_fact fsid ceph_current_fsid.stdout] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:74", "Friday 22 June 2018 04:58:57 -0400 (0:00:00.294) 0:00:16.142 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_release ceph_stable_release] ***************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:81", "Friday 22 June 2018 04:58:57 -0400 (0:00:00.044) 0:00:16.186 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_release\": \"dummy\"}, \"changed\": false}", "", "TASK [ceph-defaults : generate cluster fsid] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:85", "Friday 22 June 2018 04:58:57 -0400 (0:00:00.068) 0:00:16.255 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : reuse cluster fsid when cluster is already running] ******", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:96", "Friday 22 June 2018 04:58:57 -0400 (0:00:00.048) 0:00:16.304 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : read cluster fsid if it already exists] ******************", "task path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:105", "Friday 22 June 2018 04:58:57 -0400 (0:00:00.048) 0:00:16.352 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact fsid] *******************************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:117", "Friday 22 June 2018 04:58:57 -0400 (0:00:00.045) 0:00:16.398 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact mds_name ansible_hostname] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:123", "Friday 22 June 2018 04:58:57 -0400 (0:00:00.039) 0:00:16.438 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"mds_name\": \"controller-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact mds_name ansible_fqdn] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:129", "Friday 22 June 2018 04:58:57 -0400 (0:00:00.071) 0:00:16.509 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_owner ceph] ****************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:135", "Friday 22 June 2018 04:58:57 -0400 (0:00:00.037) 0:00:16.547 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"rbd_client_directory_owner\": \"ceph\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_group rbd_client_directory_group] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:142", "Friday 22 June 2018 04:58:57 -0400 (0:00:00.069) 0:00:16.617 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"rbd_client_directory_group\": \"ceph\"}, \"changed\": false}", "", 
"TASK [ceph-defaults : set_fact rbd_client_directory_mode 0770] *****************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:149", "Friday 22 June 2018 04:58:58 -0400 (0:00:00.072) 0:00:16.689 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"rbd_client_directory_mode\": \"0770\"}, \"changed\": false}", "", "TASK [ceph-defaults : resolve device link(s)] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:156", "Friday 22 June 2018 04:58:58 -0400 (0:00:00.071) 0:00:16.760 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact build devices from resolved symlinks] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:166", "Friday 22 June 2018 04:58:58 -0400 (0:00:00.045) 0:00:16.806 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact build final devices list] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:175", "Friday 22 June 2018 04:58:58 -0400 (0:00:00.048) 0:00:16.855 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for Debian based system] ***************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:183", "Friday 22 June 2018 04:58:58 -0400 (0:00:00.045) 0:00:16.900 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for Red Hat based system] **************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:190", "Friday 22 June 2018 04:58:58 -0400 (0:00:00.048) 0:00:16.949 *********** ", "skipping: [controller-0] 
=> {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for Red Hat] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:197", "Friday 22 June 2018 04:58:58 -0400 (0:00:00.048) 0:00:16.997 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_uid\": 167}, \"changed\": false}", "", "TASK [ceph-defaults : check if selinux is enabled] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:204", "Friday 22 June 2018 04:58:58 -0400 (0:00:00.074) 0:00:17.072 *********** ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"getenforce\"], \"delta\": \"0:00:00.003692\", \"end\": \"2018-06-22 08:58:58.970770\", \"rc\": 0, \"start\": \"2018-06-22 08:58:58.967078\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Enforcing\", \"stdout_lines\": [\"Enforcing\"]}", "", "TASK [ceph-docker-common : fail if systemd is not present] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml:2", "Friday 22 June 2018 04:58:58 -0400 (0:00:00.521) 0:00:17.593 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : make sure monitor_interface, monitor_address or monitor_address_block is defined] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:2", "Friday 22 June 2018 04:58:59 -0400 (0:00:00.043) 0:00:17.637 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : make sure radosgw_interface, radosgw_address or radosgw_address_block is defined] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:11", "Friday 22 June 2018 04:58:59 -0400 (0:00:00.057) 0:00:17.694 *********** ", 
"skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : remove ceph udev rules] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml:2", "Friday 22 June 2018 04:58:59 -0400 (0:00:00.049) 0:00:17.743 *********** ", "ok: [controller-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"path\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"state\": \"absent\"}", "ok: [controller-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"path\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"state\": \"absent\"}", "", "TASK [ceph-docker-common : set_fact monitor_name ansible_hostname] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:14", "Friday 22 June 2018 04:59:00 -0400 (0:00:00.967) 0:00:18.711 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"monitor_name\": \"controller-0\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact monitor_name ansible_fqdn] *****************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:20", "Friday 22 June 2018 04:59:00 -0400 (0:00:00.073) 0:00:18.784 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : get docker version] *********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:26", "Friday 22 June 2018 04:59:00 -0400 (0:00:00.039) 0:00:18.824 *********** ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"--version\"], \"delta\": \"0:00:00.028573\", \"end\": \"2018-06-22 08:59:00.739996\", \"rc\": 0, \"start\": 
\"2018-06-22 08:59:00.711423\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Docker version 1.13.1, build 94f4240/1.13.1\", \"stdout_lines\": [\"Docker version 1.13.1, build 94f4240/1.13.1\"]}", "", "TASK [ceph-docker-common : set_fact ceph_docker_version ceph_docker_version.stdout.split] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:32", "Friday 22 June 2018 04:59:00 -0400 (0:00:00.542) 0:00:19.366 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_docker_version\": \"1.13.1,\"}, \"changed\": false}", "", "TASK [ceph-docker-common : check if a cluster is already running] **************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:42", "Friday 22 June 2018 04:59:00 -0400 (0:00:00.069) 0:00:19.436 *********** ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mon-controller-0\"], \"delta\": \"0:00:00.029386\", \"end\": \"2018-06-22 08:59:01.362399\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-22 08:59:01.333013\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-docker-common : set_fact ceph_config_keys] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:2", "Friday 22 June 2018 04:59:01 -0400 (0:00:00.551) 0:00:19.987 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_config_keys\": [\"/etc/ceph/ceph.client.admin.keyring\", \"/etc/ceph/monmap-ceph\", \"/etc/ceph/ceph.mon.keyring\", \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\"]}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact tmp_ceph_mgr_keys add mgr keys to config and keys paths] ***", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:13", "Friday 22 June 2018 04:59:01 -0400 (0:00:00.081) 0:00:20.068 *********** ", "ok: [controller-0] => (item=controller-0) => {\"ansible_facts\": {\"tmp_ceph_mgr_keys\": \"/etc/ceph/ceph.mgr.controller-0.keyring\"}, \"changed\": false, \"item\": \"controller-0\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_keys convert mgr keys to an array] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:20", "Friday 22 June 2018 04:59:01 -0400 (0:00:00.120) 0:00:20.189 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_mgr_keys\": [\"/etc/ceph/ceph.mgr.controller-0.keyring\"]}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_config_keys merge mgr keys to config and keys paths] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:25", "Friday 22 June 2018 04:59:01 -0400 (0:00:00.082) 0:00:20.272 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_config_keys\": [\"/etc/ceph/ceph.client.admin.keyring\", \"/etc/ceph/monmap-ceph\", \"/etc/ceph/ceph.mon.keyring\", \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"/etc/ceph/ceph.mgr.controller-0.keyring\"]}, \"changed\": false}", "", "TASK [ceph-docker-common : stat for ceph config and keys] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:30", "Friday 22 June 2018 04:59:01 -0400 (0:00:00.088) 0:00:20.361 *********** ", "ok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.client.admin.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/etc/ceph/monmap-ceph) => {\"changed\": false, 
\"failed_when_result\": false, \"item\": \"/etc/ceph/monmap-ceph\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.mon.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.mgr.controller-0.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"stat\": {\"exists\": false}}", "", "TASK [ceph-docker-common : fail if we find existing cluster files] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml:5", "Friday 22 June 2018 04:59:02 -0400 (0:00:01.179) 0:00:21.541 *********** ", "skipping: [controller-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': 
u'/etc/ceph/ceph.client.admin.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.client.admin.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.client.admin.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.client.admin.keyring\"}}, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/etc/ceph/monmap-ceph', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/monmap-ceph', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/monmap-ceph', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, 
'_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/etc/ceph/monmap-ceph\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/monmap-ceph\"}}, \"item\": \"/etc/ceph/monmap-ceph\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mon.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mon.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mon.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": 
{\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mon.keyring\"}}, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-osd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-osd/ceph.keyring\"}}, \"item\": 
\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rgw/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': 
{'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-mds/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-mds/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': 
u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rbd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/etc/ceph/ceph.mgr.controller-0.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mgr.controller-0.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": 
[\"/etc/ceph/ceph.mgr.controller-0.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mgr.controller-0.keyring\"}}, \"item\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on atomic] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml:2", "Friday 22 June 2018 04:59:03 -0400 (0:00:00.248) 0:00:21.790 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml:6", "Friday 22 June 2018 04:59:03 -0400 (0:00:00.043) 0:00:21.833 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on redhat or suse] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:2", "Friday 22 June 2018 04:59:03 -0400 (0:00:00.039) 0:00:21.873 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : install ntp on redhat or suse] 
**********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:13", "Friday 22 June 2018 04:59:03 -0400 (0:00:00.045) 0:00:21.919 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml:7", "Friday 22 June 2018 04:59:03 -0400 (0:00:00.044) 0:00:21.963 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on debian] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:2", "Friday 22 June 2018 04:59:03 -0400 (0:00:00.047) 0:00:22.010 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : install ntp on debian] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:11", "Friday 22 June 2018 04:59:03 -0400 (0:00:00.050) 0:00:22.061 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml:7", "Friday 22 June 2018 04:59:03 -0400 (0:00:00.042) 0:00:22.104 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mon container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:3", "Friday 22 June 2018 04:59:03 -0400 (0:00:00.044) 0:00:22.148 
*********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph osd container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:12", "Friday 22 June 2018 04:59:03 -0400 (0:00:00.047) 0:00:22.195 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mds container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:21", "Friday 22 June 2018 04:59:03 -0400 (0:00:00.041) 0:00:22.237 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph rgw container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:30", "Friday 22 June 2018 04:59:03 -0400 (0:00:00.043) 0:00:22.280 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mgr container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:39", "Friday 22 June 2018 04:59:03 -0400 (0:00:00.142) 0:00:22.422 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph rbd mirror container] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:48", "Friday 22 June 2018 04:59:03 -0400 (0:00:00.052) 0:00:22.475 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph nfs container] *************************", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:57", "Friday 22 June 2018 04:59:03 -0400 (0:00:00.046) 0:00:22.521 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mon container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:67", "Friday 22 June 2018 04:59:03 -0400 (0:00:00.042) 0:00:22.564 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph osd container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:76", "Friday 22 June 2018 04:59:03 -0400 (0:00:00.049) 0:00:22.614 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph rgw container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:85", "Friday 22 June 2018 04:59:04 -0400 (0:00:00.048) 0:00:22.663 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mds container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:94", "Friday 22 June 2018 04:59:04 -0400 (0:00:00.043) 0:00:22.706 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mgr container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:103", "Friday 22 June 2018 04:59:04 -0400 (0:00:00.041) 0:00:22.748 *********** ", "skipping: [controller-0] => {\"changed\": false, 
\"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph rbd mirror container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:112", "Friday 22 June 2018 04:59:04 -0400 (0:00:00.046) 0:00:22.794 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph nfs container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:121", "Friday 22 June 2018 04:59:04 -0400 (0:00:00.042) 0:00:22.837 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mon_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:130", "Friday 22 June 2018 04:59:04 -0400 (0:00:00.044) 0:00:22.881 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_osd_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:137", "Friday 22 June 2018 04:59:04 -0400 (0:00:00.045) 0:00:22.927 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mds_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:144", "Friday 22 June 2018 04:59:04 -0400 (0:00:00.043) 0:00:22.970 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rgw_image_repodigest_before_pulling] ***", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:151", "Friday 22 June 2018 04:59:04 -0400 (0:00:00.046) 0:00:23.017 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:158", "Friday 22 June 2018 04:59:04 -0400 (0:00:00.043) 0:00:23.060 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:165", "Friday 22 June 2018 04:59:04 -0400 (0:00:00.048) 0:00:23.108 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_nfs_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:172", "Friday 22 June 2018 04:59:04 -0400 (0:00:00.050) 0:00:23.159 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-6 image] *********", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179", "Friday 22 June 2018 04:59:04 -0400 (0:00:00.045) 0:00:23.204 *********** ", "ok: [controller-0] => {\"attempts\": 1, \"changed\": false, \"cmd\": [\"timeout\", \"300s\", \"docker\", \"pull\", \"192.168.24.1:8787/rhceph:3-6\"], \"delta\": \"0:00:16.088112\", \"end\": \"2018-06-22 08:59:21.173416\", \"rc\": 0, \"start\": \"2018-06-22 08:59:05.085304\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Trying to pull repository 192.168.24.1:8787/rhceph ... 
\\n3-6: Pulling from 192.168.24.1:8787/rhceph\\n9a32f102e677: Pulling fs layer\\nb8aa42cec17a: Pulling fs layer\\nf00cbf28d025: Pulling fs layer\\nb8aa42cec17a: Verifying Checksum\\nb8aa42cec17a: Download complete\\n9a32f102e677: Verifying Checksum\\n9a32f102e677: Download complete\\nf00cbf28d025: Verifying Checksum\\nf00cbf28d025: Download complete\\n9a32f102e677: Pull complete\\nb8aa42cec17a: Pull complete\\nf00cbf28d025: Pull complete\\nDigest: sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\nStatus: Downloaded newer image for 192.168.24.1:8787/rhceph:3-6\", \"stdout_lines\": [\"Trying to pull repository 192.168.24.1:8787/rhceph ... \", \"3-6: Pulling from 192.168.24.1:8787/rhceph\", \"9a32f102e677: Pulling fs layer\", \"b8aa42cec17a: Pulling fs layer\", \"f00cbf28d025: Pulling fs layer\", \"b8aa42cec17a: Verifying Checksum\", \"b8aa42cec17a: Download complete\", \"9a32f102e677: Verifying Checksum\", \"9a32f102e677: Download complete\", \"f00cbf28d025: Verifying Checksum\", \"f00cbf28d025: Download complete\", \"9a32f102e677: Pull complete\", \"b8aa42cec17a: Pull complete\", \"f00cbf28d025: Pull complete\", \"Digest: sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\", \"Status: Downloaded newer image for 192.168.24.1:8787/rhceph:3-6\"]}", "", "TASK [ceph-docker-common : inspecting 192.168.24.1:8787/rhceph:3-6 image after pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:189", "Friday 22 June 2018 04:59:21 -0400 (0:00:16.599) 0:00:39.804 *********** ", "changed: [controller-0] => {\"changed\": true, \"cmd\": [\"docker\", \"inspect\", \"192.168.24.1:8787/rhceph:3-6\"], \"delta\": \"0:00:00.031880\", \"end\": \"2018-06-22 08:59:21.735819\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-22 08:59:21.703939\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": 
\\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\\n \\\"RepoTags\\\": [\\n \\\"192.168.24.1:8787/rhceph:3-6\\\"\\n ],\\n \\\"RepoDigests\\\": [\\n \\\"192.168.24.1:8787/rhceph@sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\\"\\n ],\\n \\\"Parent\\\": \\\"\\\",\\n \\\"Comment\\\": \\\"\\\",\\n \\\"Created\\\": \\\"2018-04-18T13:13:30.317845Z\\\",\\n \\\"Container\\\": \\\"\\\",\\n \\\"ContainerConfig\\\": {\\n \\\"Hostname\\\": \\\"9817222a9fd1\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": [\\n \\\"/bin/sh\\\",\\n \\\"-c\\\",\\n \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z2.repo'\\\"\\n ],\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"sha256:e8b064b6d59e5ae67703983d9bcadb3e48e4bad1443bd2d8ca86096ce6969ba9\\\",\\n \\\"Volumes\\\": {\\n \\\"/etc/ceph\\\": {},\\n \\\"/etc/ganesha\\\": {},\\n \\\"/var/lib/ceph\\\": {}\\n },\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"master\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"master\\\",\\n 
\\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\\n \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"6\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\\n \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"DockerVersion\\\": \\\"1.12.6\\\",\\n \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"9817222a9fd1\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": 
{},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"e0292b8001103cbd70a728aa73b8c602430c923944c4fcbaf5e62eda9e16530f\\\",\\n \\\"Volumes\\\": {\\n \\\"/etc/ceph\\\": {},\\n \\\"/etc/ganesha\\\": {},\\n \\\"/var/lib/ceph\\\": {}\\n },\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"master\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"master\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\\n \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 
7\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"6\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\\n \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"Architecture\\\": \\\"amd64\\\",\\n \\\"Os\\\": \\\"linux\\\",\\n \\\"Size\\\": 732827275,\\n \\\"VirtualSize\\\": 732827275,\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/4f400cee0cf5241b5500dfa5fc0ba4b0bf6b1d4756555cc7e07b19c2af9fb12b/diff:/var/lib/docker/overlay2/e7ceb7c5f142ee0ead3760ed5e37988896c004d9442b29434db4f7afb4c18364/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/4f331ea494c349b1dc25c0d4fb87b85b626b1d371ac0031cce8ddb5c48757818/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/4f331ea494c349b1dc25c0d4fb87b85b626b1d371ac0031cce8ddb5c48757818/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/4f331ea494c349b1dc25c0d4fb87b85b626b1d371ac0031cce8ddb5c48757818/work\\\"\\n }\\n },\\n \\\"RootFS\\\": {\\n \\\"Type\\\": \\\"layers\\\",\\n \\\"Layers\\\": [\\n \\\"sha256:e9fb3906049428130d8fc22e715dc6665306ebbf483290dd139be5d7457d9749\\\",\\n \\\"sha256:1b0bb3f6ad7e8dbdc1d19cf782dc06227de1d95a5d075efb592196a509e6e3a9\\\",\\n \\\"sha256:f0761cecd36be7f88de04a51a9c741d047c0ad7bbd4e2312e57f40e3f6a68447\\\"\\n ]\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" 
{\", \" \\\"Id\\\": \\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\", \" \\\"RepoTags\\\": [\", \" \\\"192.168.24.1:8787/rhceph:3-6\\\"\", \" ],\", \" \\\"RepoDigests\\\": [\", \" \\\"192.168.24.1:8787/rhceph@sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\\"\", \" ],\", \" \\\"Parent\\\": \\\"\\\",\", \" \\\"Comment\\\": \\\"\\\",\", \" \\\"Created\\\": \\\"2018-04-18T13:13:30.317845Z\\\",\", \" \\\"Container\\\": \\\"\\\",\", \" \\\"ContainerConfig\\\": {\", \" \\\"Hostname\\\": \\\"9817222a9fd1\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": [\", \" \\\"/bin/sh\\\",\", \" \\\"-c\\\",\", \" \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z2.repo'\\\"\", \" ],\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"sha256:e8b064b6d59e5ae67703983d9bcadb3e48e4bad1443bd2d8ca86096ce6969ba9\\\",\", \" \\\"Volumes\\\": {\", \" \\\"/etc/ceph\\\": {},\", \" \\\"/etc/ganesha\\\": {},\", \" \\\"/var/lib/ceph\\\": {}\", \" },\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"master\\\",\", \" \\\"GIT_CLEAN\\\": 
\\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"master\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\", \" \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"6\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\", \" \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"DockerVersion\\\": \\\"1.12.6\\\",\", \" \\\"Author\\\": \\\"Erwan Velu 
<evelu@redhat.com>\\\",\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"9817222a9fd1\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"e0292b8001103cbd70a728aa73b8c602430c923944c4fcbaf5e62eda9e16530f\\\",\", \" \\\"Volumes\\\": {\", \" \\\"/etc/ceph\\\": {},\", \" \\\"/etc/ganesha\\\": {},\", \" \\\"/var/lib/ceph\\\": {}\", \" },\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"master\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"master\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\", \" \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" 
\\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"6\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\", \" \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"Architecture\\\": \\\"amd64\\\",\", \" \\\"Os\\\": \\\"linux\\\",\", \" \\\"Size\\\": 732827275,\", \" \\\"VirtualSize\\\": 732827275,\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/4f400cee0cf5241b5500dfa5fc0ba4b0bf6b1d4756555cc7e07b19c2af9fb12b/diff:/var/lib/docker/overlay2/e7ceb7c5f142ee0ead3760ed5e37988896c004d9442b29434db4f7afb4c18364/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/4f331ea494c349b1dc25c0d4fb87b85b626b1d371ac0031cce8ddb5c48757818/merged\\\",\", \" \\\"UpperDir\\\": 
\\\"/var/lib/docker/overlay2/4f331ea494c349b1dc25c0d4fb87b85b626b1d371ac0031cce8ddb5c48757818/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/4f331ea494c349b1dc25c0d4fb87b85b626b1d371ac0031cce8ddb5c48757818/work\\\"\", \" }\", \" },\", \" \\\"RootFS\\\": {\", \" \\\"Type\\\": \\\"layers\\\",\", \" \\\"Layers\\\": [\", \" \\\"sha256:e9fb3906049428130d8fc22e715dc6665306ebbf483290dd139be5d7457d9749\\\",\", \" \\\"sha256:1b0bb3f6ad7e8dbdc1d19cf782dc06227de1d95a5d075efb592196a509e6e3a9\\\",\", \" \\\"sha256:f0761cecd36be7f88de04a51a9c741d047c0ad7bbd4e2312e57f40e3f6a68447\\\"\", \" ]\", \" }\", \" }\", \"]\"]}", "", "TASK [ceph-docker-common : set_fact image_repodigest_after_pulling] ************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:194", "Friday 22 June 2018 04:59:21 -0400 (0:00:00.568) 0:00:40.373 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"image_repodigest_after_pulling\": \"sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_mon_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:200", "Friday 22 June 2018 04:59:21 -0400 (0:00:00.081) 0:00:40.454 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_osd_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:211", "Friday 22 June 2018 04:59:21 -0400 (0:00:00.049) 0:00:40.503 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mds_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:222", "Friday 22 June 2018 04:59:21 -0400 
(0:00:00.047) 0:00:40.551 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rgw_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:233", "Friday 22 June 2018 04:59:21 -0400 (0:00:00.046) 0:00:40.598 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:244", "Friday 22 June 2018 04:59:22 -0400 (0:00:00.044) 0:00:40.642 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_updated] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:255", "Friday 22 June 2018 04:59:22 -0400 (0:00:00.047) 0:00:40.689 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_nfs_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:266", "Friday 22 June 2018 04:59:22 -0400 (0:00:00.050) 0:00:40.740 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : export local ceph dev image] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:277", "Friday 22 June 2018 04:59:22 -0400 (0:00:00.048) 0:00:40.788 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : copy ceph dev image file] 
***************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:285", "Friday 22 June 2018 04:59:22 -0400 (0:00:00.047) 0:00:40.836 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : load ceph dev image] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:292", "Friday 22 June 2018 04:59:22 -0400 (0:00:00.048) 0:00:40.884 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : remove tmp ceph dev image file] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:297", "Friday 22 June 2018 04:59:22 -0400 (0:00:00.046) 0:00:40.931 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : get ceph version] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:84", "Friday 22 June 2018 04:59:22 -0400 (0:00:00.053) 0:00:40.984 *********** ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"--entrypoint\", \"/usr/bin/ceph\", \"192.168.24.1:8787/rhceph:3-6\", \"--version\"], \"delta\": \"0:00:00.550892\", \"end\": \"2018-06-22 08:59:23.429552\", \"rc\": 0, \"start\": \"2018-06-22 08:59:22.878660\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"ceph version 12.2.4-6.el7cp (78f60b924802e34d44f7078029a40dbe6c0c922f) luminous (stable)\", \"stdout_lines\": [\"ceph version 12.2.4-6.el7cp (78f60b924802e34d44f7078029a40dbe6c0c922f) luminous (stable)\"]}", "", "TASK [ceph-docker-common : set_fact ceph_version ceph_version.stdout.split] ****", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:90", "Friday 22 
June 2018 04:59:23 -0400 (0:00:01.072) 0:00:42.057 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_version\": \"12.2.4-6.el7cp\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_release jewel] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:2", "Friday 22 June 2018 04:59:23 -0400 (0:00:00.073) 0:00:42.130 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release kraken] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:8", "Friday 22 June 2018 04:59:23 -0400 (0:00:00.047) 0:00:42.177 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release luminous] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:14", "Friday 22 June 2018 04:59:23 -0400 (0:00:00.044) 0:00:42.222 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_release\": \"luminous\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_release mimic] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:20", "Friday 22 June 2018 04:59:23 -0400 (0:00:00.165) 0:00:42.388 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : create bootstrap directories] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml:2", "Friday 22 June 2018 04:59:23 -0400 (0:00:00.046) 0:00:42.434 *********** ", "changed: [controller-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": 
\"64045\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "", "TASK [ceph-config : create ceph conf directory] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:4", "Friday 22 June 2018 04:59:26 -0400 (0:00:02.353) 0:00:44.788 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : generate ceph configuration file: 
ceph.conf] ***************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:12", "Friday 22 June 2018 04:59:26 -0400 (0:00:00.050) 0:00:44.839 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : create a local fetch directory if it does not exist] *******", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:38", "Friday 22 June 2018 04:59:26 -0400 (0:00:00.049) 0:00:44.888 *********** ", "ok: [controller-0 -> localhost] => {\"changed\": false, \"gid\": 985, \"group\": \"mistral\", \"mode\": \"0755\", \"owner\": \"mistral\", \"path\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 988}", "", "TASK [ceph-config : generate cluster uuid] *************************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:54", "Friday 22 June 2018 04:59:26 -0400 (0:00:00.198) 0:00:45.086 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : read cluster uuid if it already exists] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:64", "Friday 22 June 2018 04:59:26 -0400 (0:00:00.049) 0:00:45.135 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : ensure /etc/ceph exists] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:76", "Friday 22 June 2018 04:59:26 -0400 (0:00:00.043) 0:00:45.179 *********** ", "changed: [controller-0] => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, 
\"state\": \"directory\", \"uid\": 167}", "", "TASK [ceph-config : generate ceph.conf configuration file] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84", "Friday 22 June 2018 04:59:27 -0400 (0:00:00.622) 0:00:45.801 *********** ", "NOTIFIED HANDLER ceph-defaults : set _mon_handler_called before restart for controller-0", "NOTIFIED HANDLER ceph-defaults : copy mon restart script for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - non container for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - container for controller-0", "NOTIFIED HANDLER ceph-defaults : set _mon_handler_called after restart for controller-0", "NOTIFIED HANDLER ceph-defaults : set _osd_handler_called before restart for controller-0", "NOTIFIED HANDLER ceph-defaults : copy osd restart script for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - non container for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - container for controller-0", "NOTIFIED HANDLER ceph-defaults : set _osd_handler_called after restart for controller-0", "NOTIFIED HANDLER ceph-defaults : set _mds_handler_called before restart for controller-0", "NOTIFIED HANDLER ceph-defaults : copy mds restart script for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - non container for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - container for controller-0", "NOTIFIED HANDLER ceph-defaults : set _mds_handler_called after restart for controller-0", "NOTIFIED HANDLER ceph-defaults : set _rgw_handler_called before restart for controller-0", "NOTIFIED HANDLER ceph-defaults : copy rgw restart script for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - non container for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - container for controller-0", "NOTIFIED 
HANDLER ceph-defaults : set _rgw_handler_called after restart for controller-0", "NOTIFIED HANDLER ceph-defaults : set _mgr_handler_called before restart for controller-0", "NOTIFIED HANDLER ceph-defaults : copy mgr restart script for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - non container for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - container for controller-0", "NOTIFIED HANDLER ceph-defaults : set _mgr_handler_called after restart for controller-0", "NOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called before restart for controller-0", "NOTIFIED HANDLER ceph-defaults : copy rbd mirror restart script for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - non container for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - container for controller-0", "NOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called after restart for controller-0", "changed: [controller-0] => {\"changed\": true, \"checksum\": \"41c6f67e44237551a124af9a3133eb853ff83536\", \"dest\": \"/etc/ceph/ceph.conf\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"cb7deae369635e38668226c253d90026\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 664, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657967.33-194501377654424/source\", \"state\": \"file\", \"uid\": 0}", "", "TASK [ceph-config : set fsid fact when generate_fsid = true] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:102", "Friday 22 June 2018 04:59:30 -0400 (0:00:03.297) 0:00:49.099 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set_fact docker_exec_cmd] *************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/main.yml:2", "Friday 22 June 2018 
04:59:30 -0400 (0:00:00.047) 0:00:49.147 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"docker_exec_cmd\": \"docker exec ceph-mon-controller-0\"}, \"changed\": false}", "", "TASK [ceph-mon : make sure monitor_interface or monitor_address or monitor_address_block is configured] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/check_mandatory_vars.yml:2", "Friday 22 June 2018 04:59:30 -0400 (0:00:00.073) 0:00:49.220 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : make sure pg num is set for cephfs pools] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/check_mandatory_vars.yml:10", "Friday 22 June 2018 04:59:30 -0400 (0:00:00.051) 0:00:49.271 *********** ", "skipping: [controller-0] => (item={u'name': u'cephfs_data', u'pgs': u''}) => {\"changed\": false, \"item\": {\"name\": \"cephfs_data\", \"pgs\": \"\"}, \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item={u'name': u'cephfs_metadata', u'pgs': u''}) => {\"changed\": false, \"item\": {\"name\": \"cephfs_metadata\", \"pgs\": \"\"}, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : generate monitor initial keyring] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:2", "Friday 22 June 2018 04:59:30 -0400 (0:00:00.060) 0:00:49.331 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : read monitor initial keyring if it already exists] ************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:11", "Friday 22 June 2018 04:59:30 -0400 (0:00:00.051) 0:00:49.383 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : create monitor initial keyring] 
*******************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:22", "Friday 22 June 2018 04:59:30 -0400 (0:00:00.043) 0:00:49.426 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set initial monitor key permissions] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:34", "Friday 22 June 2018 04:59:30 -0400 (0:00:00.044) 0:00:49.471 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : create (and fix ownership of) monitor directory] **************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:42", "Friday 22 June 2018 04:59:30 -0400 (0:00:00.047) 0:00:49.518 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set_fact client_admin_ceph_authtool_cap >= ceph_release_num.luminous] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:51", "Friday 22 June 2018 04:59:30 -0400 (0:00:00.042) 0:00:49.561 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set_fact client_admin_ceph_authtool_cap < ceph_release_num.luminous] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:63", "Friday 22 June 2018 04:59:30 -0400 (0:00:00.044) 0:00:49.605 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : create custom admin keyring] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:74", "Friday 22 June 2018 04:59:31 -0400 (0:00:00.052) 0:00:49.658 *********** ", "skipping: [controller-0] => 
{\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set ownership of admin keyring] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:88", "Friday 22 June 2018 04:59:31 -0400 (0:00:00.045) 0:00:49.703 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : import admin keyring into mon keyring] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:99", "Friday 22 June 2018 04:59:31 -0400 (0:00:00.044) 0:00:49.747 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : ceph monitor mkfs with keyring] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:106", "Friday 22 June 2018 04:59:31 -0400 (0:00:00.045) 0:00:49.793 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : ceph monitor mkfs without keyring] ****************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:113", "Friday 22 June 2018 04:59:31 -0400 (0:00:00.044) 0:00:49.837 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : ensure systemd service override directory exists] *************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml:2", "Friday 22 June 2018 04:59:31 -0400 (0:00:00.049) 0:00:49.887 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : add ceph-mon systemd service overrides] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml:10", "Friday 
22 June 2018 04:59:31 -0400 (0:00:00.045) 0:00:49.932 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : start the monitor service] ************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml:20", "Friday 22 June 2018 04:59:31 -0400 (0:00:00.044) 0:00:49.977 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : enable the ceph-mon.target service] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml:29", "Friday 22 June 2018 04:59:31 -0400 (0:00:00.045) 0:00:50.023 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : include ceph_keys.yml] ****************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/main.yml:19", "Friday 22 June 2018 04:59:31 -0400 (0:00:00.045) 0:00:50.068 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : collect all the pools] ****************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/secure_cluster.yml:2", "Friday 22 June 2018 04:59:31 -0400 (0:00:00.043) 0:00:50.111 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : secure the cluster] *******************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/secure_cluster.yml:7", "Friday 22 June 2018 04:59:31 -0400 (0:00:00.044) 0:00:50.156 *********** ", "", "TASK [ceph-mon : set_fact ceph_config_keys] ************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:2", "Friday 22 June 2018 
04:59:31 -0400 (0:00:00.047) 0:00:50.204 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_config_keys\": [\"/etc/ceph/ceph.client.admin.keyring\", \"/etc/ceph/monmap-ceph\", \"/etc/ceph/ceph.mon.keyring\", \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"/var/lib/ceph/bootstrap-mds/ceph.keyring\"]}, \"changed\": false}", "", "TASK [ceph-mon : register rbd bootstrap key] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:12", "Friday 22 June 2018 04:59:31 -0400 (0:00:00.172) 0:00:50.376 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"bootstrap_rbd_keyring\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\"]}, \"changed\": false}", "", "TASK [ceph-mon : merge rbd bootstrap key to config and keys paths] *************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:18", "Friday 22 June 2018 04:59:31 -0400 (0:00:00.170) 0:00:50.547 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_config_keys\": [\"/etc/ceph/ceph.client.admin.keyring\", \"/etc/ceph/monmap-ceph\", \"/etc/ceph/ceph.mon.keyring\", \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\"]}, \"changed\": false}", "", "TASK [ceph-mon : set_fact tmp_ceph_mgr_keys add mgr keys to config and keys paths] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:23", "Friday 22 June 2018 04:59:32 -0400 (0:00:00.174) 0:00:50.721 *********** ", "ok: [controller-0] => (item=controller-0) => {\"ansible_facts\": {\"tmp_ceph_mgr_keys\": \"/etc/ceph/ceph.mgr.controller-0.keyring\"}, \"changed\": false, \"item\": \"controller-0\"}", "", "TASK [ceph-mon : set_fact ceph_mgr_keys convert mgr keys to an array] **********", "task path: 
/usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:31", "Friday 22 June 2018 04:59:32 -0400 (0:00:00.269) 0:00:50.990 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_mgr_keys\": [\"/etc/ceph/ceph.mgr.controller-0.keyring\"]}, \"changed\": false}", "", "TASK [ceph-mon : set_fact ceph_config_keys merge mgr keys to config and keys paths] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:37", "Friday 22 June 2018 04:59:32 -0400 (0:00:00.078) 0:00:51.069 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_config_keys\": [\"/etc/ceph/ceph.client.admin.keyring\", \"/etc/ceph/monmap-ceph\", \"/etc/ceph/ceph.mon.keyring\", \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"/etc/ceph/ceph.mgr.controller-0.keyring\"]}, \"changed\": false}", "", "TASK [ceph-mon : stat for ceph config and keys] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:43", "Friday 22 June 2018 04:59:32 -0400 (0:00:00.079) 0:00:51.149 *********** ", "ok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.client.admin.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/etc/ceph/monmap-ceph) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/monmap-ceph\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.mon.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": 
\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.mgr.controller-0.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"stat\": {\"exists\": false}}", "", "TASK [ceph-mon : try to copy ceph keys] ****************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:54", "Friday 22 June 2018 04:59:33 -0400 (0:00:01.147) 0:00:52.296 *********** ", "skipping: [controller-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.client.admin.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.client.admin.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': 
False}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.client.admin.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.client.admin.keyring\"}}, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/etc/ceph/monmap-ceph', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/monmap-ceph', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/monmap-ceph', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/etc/ceph/monmap-ceph\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": 
{\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/monmap-ceph\"}}, \"item\": \"/etc/ceph/monmap-ceph\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mon.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mon.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mon.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mon.keyring\"}}, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: 
[controller-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-osd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-osd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': 
u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rgw/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-mds/ceph.keyring', u'get_md5': None, 
u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-mds/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": 
null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rbd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/etc/ceph/ceph.mgr.controller-0.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mgr.controller-0.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mgr.controller-0.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": 
\"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mgr.controller-0.keyring\"}}, \"item\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : try to copy ceph config] **************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:68", "Friday 22 June 2018 04:59:33 -0400 (0:00:00.149) 0:00:52.446 *********** ", "skipping: [controller-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.client.admin.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.client.admin.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.client.admin.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": 
\"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.client.admin.keyring\"}}, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/etc/ceph/monmap-ceph', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/monmap-ceph', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/monmap-ceph', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/etc/ceph/monmap-ceph\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/monmap-ceph\"}}, \"item\": \"/etc/ceph/monmap-ceph\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, 
'_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mon.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mon.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mon.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mon.keyring\"}}, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': 
u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-osd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-osd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": 
[\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rgw/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-mds/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, 
\"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-mds/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": 
\"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rbd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/etc/ceph/ceph.mgr.controller-0.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mgr.controller-0.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mgr.controller-0.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mgr.controller-0.keyring\"}}, \"item\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set selinux permissions] 
**************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:83", "Friday 22 June 2018 04:59:33 -0400 (0:00:00.141) 0:00:52.588 *********** ", "ok: [controller-0] => (item=/etc/ceph) => {\"changed\": false, \"cmd\": \"chcon -Rt svirt_sandbox_file_t /etc/ceph\", \"delta\": \"0:00:00.005636\", \"end\": \"2018-06-22 08:59:34.512006\", \"item\": \"/etc/ceph\", \"rc\": 0, \"start\": \"2018-06-22 08:59:34.506370\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [controller-0] => (item=/var/lib/ceph) => {\"changed\": false, \"cmd\": \"chcon -Rt svirt_sandbox_file_t /var/lib/ceph\", \"delta\": \"0:00:00.005283\", \"end\": \"2018-06-22 08:59:34.956030\", \"item\": \"/var/lib/ceph\", \"rc\": 0, \"start\": \"2018-06-22 08:59:34.950747\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-mon : populate kv_store with default ceph.conf] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:2", "Friday 22 June 2018 04:59:34 -0400 (0:00:00.986) 0:00:53.574 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : populate kv_store with custom ceph.conf] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:18", "Friday 22 June 2018 04:59:35 -0400 (0:00:00.053) 0:00:53.627 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : delete populate-kv-store docker] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:36", "Friday 22 June 2018 04:59:35 -0400 (0:00:00.051) 0:00:53.679 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was 
False\"}", "", "TASK [ceph-mon : generate systemd unit file] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:43", "Friday 22 June 2018 04:59:35 -0400 (0:00:00.045) 0:00:53.725 *********** ", "changed: [controller-0] => {\"changed\": true, \"checksum\": \"389416528a79daff9f46e3b26fae4605355acb8e\", \"dest\": \"/etc/systemd/system/ceph-mon@.service\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"9619471e5e05d96278e92bd89fa78172\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:systemd_unit_file_t:s0\", \"size\": 794, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657975.15-190521990697669/source\", \"state\": \"file\", \"uid\": 0}", "", "TASK [ceph-mon : systemd start mon container] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:54", "Friday 22 June 2018 04:59:37 -0400 (0:00:02.878) 0:00:56.604 *********** ", "ok: [controller-0] => {\"changed\": false, \"enabled\": true, \"name\": \"ceph-mon@controller-0\", \"state\": \"started\", \"status\": {\"ActiveEnterTimestampMonotonic\": \"0\", \"ActiveExitTimestampMonotonic\": \"0\", \"ActiveState\": \"inactive\", \"After\": \"docker.service basic.target system-ceph\\\\x5cx2dmon.slice systemd-journald.socket\", \"AllowIsolate\": \"no\", \"AmbientCapabilities\": \"0\", \"AssertResult\": \"no\", \"AssertTimestampMonotonic\": \"0\", \"Before\": \"shutdown.target\", \"BlockIOAccounting\": \"no\", \"BlockIOWeight\": \"18446744073709551615\", \"CPUAccounting\": \"no\", \"CPUQuotaPerSecUSec\": \"infinity\", \"CPUSchedulingPolicy\": \"0\", \"CPUSchedulingPriority\": \"0\", \"CPUSchedulingResetOnFork\": \"no\", \"CPUShares\": \"18446744073709551615\", \"CanIsolate\": \"no\", \"CanReload\": \"no\", \"CanStart\": \"yes\", \"CanStop\": \"yes\", \"CapabilityBoundingSet\": \"18446744073709551615\", \"ConditionResult\": \"no\", 
\"ConditionTimestampMonotonic\": \"0\", \"Conflicts\": \"shutdown.target\", \"ControlPID\": \"0\", \"DefaultDependencies\": \"yes\", \"Delegate\": \"no\", \"Description\": \"Ceph Monitor\", \"DevicePolicy\": \"auto\", \"EnvironmentFile\": \"/etc/environment (ignore_errors=yes)\", \"ExecMainCode\": \"0\", \"ExecMainExitTimestampMonotonic\": \"0\", \"ExecMainPID\": \"0\", \"ExecMainStartTimestampMonotonic\": \"0\", \"ExecMainStatus\": \"0\", \"ExecStart\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker run --rm --name ceph-mon-%i --net=host --memory=1g --cpu-quota=100000 -v /var/lib/ceph:/var/lib/ceph -v /etc/ceph:/etc/ceph -v /etc/localtime:/etc/localtime:ro --net=host -e IP_VERSION=4 -e MON_IP=172.17.3.11 -e CLUSTER=ceph -e FSID=53912472-747b-11e8-95a3-5254003d7dcb -e CEPH_PUBLIC_NETWORK=172.17.3.0/24 -e CEPH_DAEMON=MON 192.168.24.1:8787/rhceph:3-6 ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStartPre\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm ceph-mon-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStopPost\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-mon-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"FailureAction\": \"none\", \"FileDescriptorStoreMax\": \"0\", \"FragmentPath\": \"/etc/systemd/system/ceph-mon@.service\", \"GuessMainPID\": \"yes\", \"IOScheduling\": \"0\", \"Id\": \"ceph-mon@controller-0.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", \"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", \"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", \"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", \"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", 
\"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", \"LimitMEMLOCK\": \"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": \"4096\", \"LimitNPROC\": \"127793\", \"LimitRSS\": \"18446744073709551615\", \"LimitRTPRIO\": \"0\", \"LimitRTTIME\": \"18446744073709551615\", \"LimitSIGPENDING\": \"127793\", \"LimitSTACK\": \"18446744073709551615\", \"LoadState\": \"loaded\", \"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": \"18446744073709551615\", \"MountFlags\": \"0\", \"Names\": \"ceph-mon@controller-0.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", \"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": \"0\", \"OnFailureJobMode\": \"replace\", \"PermissionsStartOnly\": \"no\", \"PrivateDevices\": \"no\", \"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": \"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", \"Slice\": \"system-ceph\\\\x5cx2dmon.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", \"StandardOutput\": \"journal\", \"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", \"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": \"dead\", \"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": \"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", \"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", 
\"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": \"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", \"Transient\": \"no\", \"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": \"disabled\", \"Wants\": \"system-ceph\\\\x5cx2dmon.slice\", \"WatchdogTimestampMonotonic\": \"0\", \"WatchdogUSec\": \"0\"}}", "", "TASK [ceph-mon : configure ceph profile.d aliases] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/configure_ceph_command_aliases.yml:2", "Friday 22 June 2018 04:59:38 -0400 (0:00:00.957) 0:00:57.561 *********** ", "changed: [controller-0] => {\"changed\": true, \"checksum\": \"78965c7dfcde4827c1cb8645bc7a444472e87718\", \"dest\": \"/etc/profile.d/ceph-aliases.sh\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"66a9bfe5c26a22ade3c67cc7c7a58d2c\", \"mode\": \"0755\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:bin_t:s0\", \"size\": 375, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529657978.98-217864557131037/source\", \"state\": \"file\", \"uid\": 0}", "", "TASK [ceph-mon : wait for monitor socket to exist] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:12", "Friday 22 June 2018 04:59:41 -0400 (0:00:02.897) 0:01:00.458 *********** ", "changed: [controller-0] => {\"attempts\": 1, \"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"sh\", \"-c\", \"stat /var/run/ceph/ceph-mon.controller-0.asok || stat /var/run/ceph/ceph-mon.controller-0.localdomain.asok\"], \"delta\": \"0:00:00.089936\", \"end\": \"2018-06-22 08:59:42.437535\", \"rc\": 0, \"start\": \"2018-06-22 08:59:42.347599\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \" File: '/var/run/ceph/ceph-mon.controller-0.asok'\\n Size: 0 \\tBlocks: 0 IO Block: 4096 socket\\nDevice: 33h/51d\\tInode: 96802491 Links: 1\\nAccess: 
(0755/srwxr-xr-x) Uid: ( 167/ ceph) Gid: ( 167/ ceph)\\nAccess: 2018-06-22 08:59:40.510805580 +0000\\nModify: 2018-06-22 08:59:40.510805580 +0000\\nChange: 2018-06-22 08:59:40.510805580 +0000\\n Birth: -\", \"stdout_lines\": [\" File: '/var/run/ceph/ceph-mon.controller-0.asok'\", \" Size: 0 \\tBlocks: 0 IO Block: 4096 socket\", \"Device: 33h/51d\\tInode: 96802491 Links: 1\", \"Access: (0755/srwxr-xr-x) Uid: ( 167/ ceph) Gid: ( 167/ ceph)\", \"Access: 2018-06-22 08:59:40.510805580 +0000\", \"Modify: 2018-06-22 08:59:40.510805580 +0000\", \"Change: 2018-06-22 08:59:40.510805580 +0000\", \" Birth: -\"]}", "", "TASK [ceph-mon : ipv4 - force peer addition as potential bootstrap peer for cluster bringup - monitor_interface] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:19", "Friday 22 June 2018 04:59:42 -0400 (0:00:00.608) 0:01:01.067 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : ipv4 - force peer addition as potential bootstrap peer for cluster bringup - monitor_address] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:29", "Friday 22 June 2018 04:59:42 -0400 (0:00:00.086) 0:01:01.154 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : ipv4 - force peer addition as potential bootstrap peer for cluster bringup - monitor_address_block] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:39", "Friday 22 June 2018 04:59:42 -0400 (0:00:00.085) 0:01:01.240 *********** ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--admin-daemon\", \"/var/run/ceph/ceph-mon.controller-0.asok\", \"add_bootstrap_peer_hint\", \"172.17.3.11\"], \"delta\": \"0:00:00.181632\", \"end\": \"2018-06-22 08:59:43.399533\", \"failed_when_result\": false, \"rc\": 0, 
\"start\": \"2018-06-22 08:59:43.217901\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"mon already active; ignoring bootstrap hint\", \"stdout_lines\": [\"mon already active; ignoring bootstrap hint\"]}", "", "TASK [ceph-mon : ipv6 - force peer addition as potential bootstrap peer for cluster bringup - monitor_interface] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:49", "Friday 22 June 2018 04:59:43 -0400 (0:00:00.784) 0:01:02.024 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : ipv6 - force peer addition as potential bootstrap peer for cluster bringup - monitor_address] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:59", "Friday 22 June 2018 04:59:43 -0400 (0:00:00.052) 0:01:02.077 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : ipv6 - force peer addition as potential bootstrap peer for cluster bringup - monitor_address_block] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:69", "Friday 22 June 2018 04:59:43 -0400 (0:00:00.053) 0:01:02.131 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : push ceph files to the ansible server] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/fetch_configs.yml:2", "Friday 22 June 2018 04:59:43 -0400 (0:00:00.055) 0:01:02.186 *********** ", "changed: [controller-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.client.admin.keyring', u'invocation': {u'module_args': 
{u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.client.admin.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": true, \"checksum\": \"c11ffdd0583ec38685e73e9e23806b1ab1796f83\", \"dest\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb/etc/ceph/ceph.client.admin.keyring\", \"item\": [\"/etc/ceph/ceph.client.admin.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.client.admin.keyring\"}}, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": \"8521e5c370de28dd196a5547e194df11\", \"remote_checksum\": \"c11ffdd0583ec38685e73e9e23806b1ab1796f83\", \"remote_md5sum\": null}", "failed: [controller-0] (item=[u'/etc/ceph/monmap-ceph', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/monmap-ceph', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': 
u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/monmap-ceph', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/etc/ceph/monmap-ceph\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/monmap-ceph\"}}, \"item\": \"/etc/ceph/monmap-ceph\", \"stat\": {\"exists\": false}}], \"msg\": \"file not found: /etc/ceph/monmap-ceph\"}", "changed: [controller-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mon.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mon.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": true, \"checksum\": \"92ab17613a97ae7cc44b346dae569b9621cfa58c\", \"dest\": 
\"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb/etc/ceph/ceph.mon.keyring\", \"item\": [\"/etc/ceph/ceph.mon.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mon.keyring\"}}, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": \"b7f24f6f4dd819f255a696a198039699\", \"remote_checksum\": \"92ab17613a97ae7cc44b346dae569b9621cfa58c\", \"remote_md5sum\": null}", "changed: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-osd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": true, \"checksum\": \"7bc2403b825b6dc38c05550b562edae21a8a427c\", \"dest\": 
\"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"item\": [\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-osd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": \"aae8af10d53876571689885523700ac5\", \"remote_checksum\": \"7bc2403b825b6dc38c05550b562edae21a8a427c\", \"remote_md5sum\": null}", "changed: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": true, \"checksum\": \"7fcd04c801012ba2ce36cd691ad1f9d78e601ce7\", \"dest\": 
\"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"item\": [\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rgw/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": \"7cf75a1195ec902e4c83476d421838bd\", \"remote_checksum\": \"7fcd04c801012ba2ce36cd691ad1f9d78e601ce7\", \"remote_md5sum\": null}", "changed: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-mds/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": true, \"checksum\": \"a9c3167c61374bc5b2700218dc7da7ef3586d25a\", \"dest\": 
\"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-mds/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": \"e813210a29326b4c8491bb41f44b3ac9\", \"remote_checksum\": \"a9c3167c61374bc5b2700218dc7da7ef3586d25a\", \"remote_md5sum\": null}", "changed: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": true, \"checksum\": \"bc3d34502a665ea005f6da312409602a2e167f1b\", \"dest\": 
\"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"item\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rbd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": \"a185e79c17052dc0692c1fedd879676c\", \"remote_checksum\": \"bc3d34502a665ea005f6da312409602a2e167f1b\", \"remote_md5sum\": null}", "failed: [controller-0] (item=[u'/etc/ceph/ceph.mgr.controller-0.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mgr.controller-0.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mgr.controller-0.keyring\", {\"_ansible_delegated_vars\": 
{\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/e23ce103-130e-4361-9f46-19e0f9010cc4/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mgr.controller-0.keyring\"}}, \"item\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"stat\": {\"exists\": false}}], \"msg\": \"file not found: /etc/ceph/ceph.mgr.controller-0.keyring\"}", "", "RUNNING HANDLER [ceph-defaults : set _mon_handler_called before restart] *******", "Friday 22 June 2018 04:59:47 -0400 (0:00:03.892) 0:01:06.079 *********** ", "", "RUNNING HANDLER [ceph-defaults : copy mon restart script] **********************", "Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.079 *********** ", "", "RUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - non container] ***", "Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.079 *********** ", "", "RUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - container] *******", "Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.080 *********** ", "", "RUNNING HANDLER [ceph-defaults : set _mon_handler_called after restart] ********", "Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.080 *********** ", "", "RUNNING HANDLER [ceph-defaults : set _osd_handler_called before restart] *******", "Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.081 *********** ", "", "RUNNING HANDLER [ceph-defaults : copy osd restart script] **********************", "Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.081 *********** ", "", "RUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - 
non container] ***", "Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.081 *********** ", "", "RUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - container] ******", "Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.082 *********** ", "", "RUNNING HANDLER [ceph-defaults : set _osd_handler_called after restart] ********", "Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.083 *********** ", "", "RUNNING HANDLER [ceph-defaults : set _mds_handler_called before restart] *******", "Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.083 *********** ", "", "RUNNING HANDLER [ceph-defaults : copy mds restart script] **********************", "Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.083 *********** ", "", "RUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - non container] ***", "Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.084 *********** ", "", "RUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - container] *******", "Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.084 *********** ", "", "RUNNING HANDLER [ceph-defaults : set _mds_handler_called after restart] ********", "Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.084 *********** ", "", "RUNNING HANDLER [ceph-defaults : set _rgw_handler_called before restart] *******", "Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.085 *********** ", "", "RUNNING HANDLER [ceph-defaults : copy rgw restart script] **********************", "Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.085 *********** ", "", "RUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - non container] ***", "Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.086 *********** ", "", "RUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - container] *******", "Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.086 *********** ", "", "RUNNING HANDLER [ceph-defaults : set _rgw_handler_called after restart] 
********", "Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.086 *********** ", "", "RUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called before restart] ***", "Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.087 *********** ", "", "RUNNING HANDLER [ceph-defaults : copy rbd mirror restart script] ***************", "Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.087 *********** ", "", "RUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - non container] ***", "Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.087 *********** ", "", "RUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - container] ***", "Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.088 *********** ", "", "RUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called after restart] ***", "Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.088 *********** ", "", "RUNNING HANDLER [ceph-defaults : set _mgr_handler_called before restart] *******", "Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.088 *********** ", "", "RUNNING HANDLER [ceph-defaults : copy mgr restart script] **********************", "Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.089 *********** ", "", "RUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - non container] ***", "Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.089 *********** ", "", "RUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - container] *******", "Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.089 *********** ", "", "RUNNING HANDLER [ceph-defaults : set _mgr_handler_called after restart] ********", "Friday 22 June 2018 04:59:47 -0400 (0:00:00.000) 0:01:06.090 *********** ", "", "PLAY RECAP *********************************************************************", "ceph-0 : ok=3 changed=0 unreachable=0 failed=0 ", "compute-0 : ok=4 changed=0 unreachable=0 failed=0 ", "controller-0 : ok=54 changed=7 unreachable=0 failed=1 
", "", "", "INSTALLER STATUS ***************************************************************", "Install Ceph Monitor : In Progress (0:00:55)", "\tThis phase can be restarted by running: roles/ceph-mon/tasks/main.yml", "", "Friday 22 June 2018 04:59:47 -0400 (0:00:00.004) 0:01:06.095 *********** ", "=============================================================================== ", "ceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-6 image -------- 16.60s", "/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179 ----", "gather and delegate facts ----------------------------------------------- 7.00s", "/usr/share/ceph-ansible/site-docker.yml.sample:29 -----------------------------", "ceph-mon : push ceph files to the ansible server ------------------------ 3.89s", "/usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/fetch_configs.yml:2 -------", "ceph-config : generate ceph.conf configuration file --------------------- 3.30s", "/usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84 -------------------", "gather facts ------------------------------------------------------------ 3.29s", "/usr/share/ceph-ansible/site-docker.yml.sample:24 -----------------------------", "ceph-mon : configure ceph profile.d aliases ----------------------------- 2.90s", "/usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/configure_ceph_command_aliases.yml:2 ", "ceph-mon : generate systemd unit file ----------------------------------- 2.88s", "/usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:43 ", "ceph-docker-common : create bootstrap directories ----------------------- 2.35s", "/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml:2 -", "ceph-docker-common : stat for ceph config and keys ---------------------- 1.18s", "/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:30 -", "ceph-mon : stat for ceph config and keys -------------------------------- 1.15s", 
"/usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:43 -------", "ceph-docker-common : get ceph version ----------------------------------- 1.07s", "/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:84 ------------", "ceph-mon : set selinux permissions -------------------------------------- 0.99s", "/usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:83 -------", "ceph-docker-common : remove ceph udev rules ----------------------------- 0.97s", "/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml:2 ", "ceph-mon : systemd start mon container ---------------------------------- 0.96s", "/usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:54 ", "ceph-mon : ipv4 - force peer addition as potential bootstrap peer for cluster bringup - monitor_address_block --- 0.78s", "/usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:39 ---------------", "check if it is atomic host ---------------------------------------------- 0.69s", "/usr/share/ceph-ansible/site-docker.yml.sample:37 -----------------------------", "ceph-defaults : check for a mon container ------------------------------- 0.65s", "/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:2 ", "ceph-config : ensure /etc/ceph exists ----------------------------------- 0.62s", "/usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:76 -------------------", "ceph-mon : wait for monitor socket to exist ----------------------------- 0.61s", "/usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:12 ---------------", "ceph-defaults : check if it is atomic host ------------------------------ 0.60s", "/usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:2 -----------------"]} >2018-06-22 04:59:47,686 p=11115 u=mistral | NO MORE HOSTS LEFT ************************************************************* >2018-06-22 04:59:47,687 p=11115 u=mistral | PLAY RECAP 
********************************************************************* >2018-06-22 04:59:47,687 p=11115 u=mistral | ceph-0 : ok=87 changed=41 unreachable=0 failed=0 >2018-06-22 04:59:47,687 p=11115 u=mistral | compute-0 : ok=105 changed=43 unreachable=0 failed=0 >2018-06-22 04:59:47,687 p=11115 u=mistral | controller-0 : ok=146 changed=44 unreachable=0 failed=0 >2018-06-22 04:59:47,687 p=11115 u=mistral | undercloud : ok=20 changed=9 unreachable=0 failed=1